Case Study — Advogram

Finding a Paying Market for HR Tools in 6 Weeks

How I ran a $3,430 GTM experiment that validated the ATS niche, screened 10 geos for purchasing power, and built the unit economics for the next stage — a paid A/B test on Western markets.

Andrey Rogovsky
8 min read · Updated
Live experiment · Apr 2026

6 weeks · Experiment duration

$3,430 · Total ad spend

337 · Sign-ups

$0.32 · CPA in France

15–19% · CTR on ATS keywords

337 sign-ups. 10 geos. 1 paying market. All for $3,430 and 6 weeks.

Advogram is an open-source browser extension for IT, Design, and Marketing job seekers. It filters fake postings, checks resume ATS compatibility, and tracks application status. The product was built. The question was: who actually needs it, where are they, and will they pay? Instead of surveys, I ran a paid traffic experiment across an unrestricted global geo — letting real search behavior answer the question.

I used $3,430 in Google Ads as a fast market signal — not to acquire paying users, but to answer four questions: Is there real demand for ATS resume tools? Who is actually searching? Which geos will pay? And what do the unit economics look like? Six weeks later: demand confirmed, 9 out of 10 geos screened out, and a clear path to $0.32–3 CPA on Western traffic.

What the Experiment Was Designed to Answer#

Four research questions drove every decision in the campaign setup:

1. Is there real, measurable search demand for ATS resume tools — or is it a niche only discussed in forums?

2. Which geographies are organically searching for this? No geo restrictions at launch — let the data show where the audience lives.

3. Among those geos, which ones have the GDP and SaaS spending behavior to actually convert to paid users at $9–15/mo?

4. What does a realistic unit economics model look like — CPA, LTV, payback period — if we focus budget on the right markets?

Why not run user interviews first?

Surveys tell you what people say they want. Search behavior tells you what they actually look for. At $0.87 avg CPC, paid traffic was the cheapest way to get statistically meaningful signal fast.

What happens after the GTM experiment?

The next phase is a paid A/B test on UK, DE, FR, CA, AU — validating willingness to pay at $9 vs $15/mo on traffic that already converts at low CPA.

Why Not Just Build and See What Happens?#

Three reasons to run a structured experiment before investing in growth:

Niche validation risk

ATS checker tools like Jobscan and Resume Worded already exist. The question wasn't "does the niche exist?" but "is there unsatisfied demand, and at what CPC?" A 15–19% CTR on ATS keywords — vs the 3–5% B2C SaaS average — answered that clearly.

Geo selection risk

Without data, the assumption would have been to target the US. The experiment revealed that India alone drove 59% of all conversions — a market with $2,400 GDP/capita. Targeting the US blindly would have meant ignoring France at $0.32 CPA.

Monetization risk

Freemium only works if a subset converts to paid. Knowing that 9 of 10 organic geos are low-purchasing-power markets fundamentally changes the monetization strategy: you cannot price globally, you must target selectively.

The experiment cost $3,430. Finding this out post-launch with an acquired user base that won't pay would have cost far more.

Experiment Setup#

One Google Ads campaign. Unrestricted geo targeting. Conversion event: Advogram Screener sign-up. Six weeks of data collection.

$3,430 · Total spend

25.7K · Impressions

3,930 · Clicks

337 · Conversions

Campaign: "observer" — single campaign, all ad groups enabled, broad geo targeting.

Keywords: ATS resume checker cluster — ats scanner, ats resume checker, ats score checker, resume checker, jobscan alternatives.

Conversion event: Advogram Screener sign-up — tracked via Screener analytics (interval: weekly).

Performance window: Mar 16 – Apr 19, 2026. Peak sign-up week: March 16 (~110 sign-ups in week 1).

Avg. CPC: $0.87 — well below the $2–4 typical range for B2C SaaS in this category.

Optimization score: 99.9% — the campaign was structurally sound throughout the experiment.
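The setup totals imply a few derived figures worth keeping in view when reading the per-geo numbers later. A minimal sanity-check sketch, using only the totals above:

```python
# Derived figures from the experiment totals - a quick sanity check.
spend, clicks, signups = 3430, 3930, 337

avg_cpc = spend / clicks            # ~$0.87, matches the reported average CPC
blended_cpa = spend / signups       # ~$10.18 across all geos, before geo screening
click_to_signup = signups / clicks  # ~8.6% click-to-signup conversion

print(f"avg CPC: ${avg_cpc:.2f}")
print(f"blended CPA: ${blended_cpa:.2f}")
print(f"click-to-signup: {click_to_signup:.1%}")
```

The blended CPA of ~$10 is dominated by high-volume, low-intent geos; the geo screening below is what separates it from the $0.32–3 range quoted for Western markets.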

What the Data Showed#

Four findings, each actionable.

Finding 1 — The ATS Niche Is Real and Undersupplied

A 15–19% CTR is exceptional for B2C SaaS. B2C search averages 3–5%. This signal means users searching for ATS tools are not finding what they need — and clicking aggressively when they see something relevant.

CTR on ATS keywords: 15–19%
Google Ads · Screener Analytics

ats scanner: $952 spend · 1,230 clicks · 13.2% CTR

ats resume checker: $795 spend · 995 clicks · 17.7% CTR

ats score checker: $400 spend · 476 clicks · 18.5% CTR

ats score: $238 spend · 268 clicks · 18.8% CTR

best resume: $398 spend · 236 clicks · 15.7% CTR

Competitors appear in the search landscape (Jobscan, Resume Worded, Skillsyncer) — but CTR this high indicates Advogram's positioning is differentiated enough to outperform them on relevance.

Conclusion: the market is actively searching, not just browsing. Demand is pull-based, not push-required.

Finding 2 — 9 of 10 Organic Geos Are Non-Paying Markets

The experiment ran globally. 10 countries generated conversions. Cross-referencing CPA with GDP/capita and SaaS spending behavior produced a clear split: one viable market, one segmentation opportunity, eight geos to cut.

10 geos converted. 1 is viable for paid.
Google Ads · GDP data

France: 3 conv. · $0.32 CPA · ~$45K GDP/capita → Viable. Highest purchasing power in the set. CPA 10–30x cheaper than all other geos.

China: 2 conv. · $1.21 CPA · ~$13K GDP/capita → Uncertain. Middle class exists but VPN dependency and payment friction reduce conversion probability.

India: 199 conv. · $3.08 CPA · ~$2,400 GDP/capita → Segment only. 59% of all conversions, but mass market won't pay $9–15/mo. Senior IT profiles in Bangalore/Mumbai are the viable slice.

Egypt: 8 conv. · $2.89 CPA · ~$3,500 GDP/capita → Cut.

Pakistan: 6 conv. · $2.58 CPA · ~$1,600 GDP/capita → Cut.

Bangladesh: 7 conv. · $3.90 CPA · ~$2,700 GDP/capita → Cut.

Syria: 2 conv. · $3.25 CPA · <$1,000 GDP/capita → Cut.

Myanmar: 1 conv. · $1.82 CPA · ~$1,200 GDP/capita → Cut.

Algeria: 1 conv. · $1.27 CPA · ~$4,000 GDP/capita → Cut.

Indonesia: 1 conv. · $3.42 CPA · ~$4,900 GDP/capita → Cut.

Key insight: Western traffic was almost absent in the experiment — not because it doesn't exist, but because geo targeting was unrestricted and Eastern markets dominate volume. France at $0.32 CPA is a signal, not an outlier.
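The screening logic above can be expressed as a small decision rule over the observed data. This is an illustrative sketch, not the analysis actually run: the thresholds are my assumptions, chosen to reproduce the verdicts listed:

```python
# Illustrative geo screening: cross-reference observed CPA with GDP per capita.
# Thresholds are hypothetical, picked to reproduce the verdicts in the list above.

GEOS = [
    # (country, conversions, cpa_usd, gdp_per_capita_usd)
    ("France",     3,   0.32, 45_000),
    ("China",      2,   1.21, 13_000),
    ("India",    199,   3.08,  2_400),
    ("Egypt",      8,   2.89,  3_500),
    ("Pakistan",   6,   2.58,  1_600),
    ("Bangladesh", 7,   3.90,  2_700),
    ("Syria",      2,   3.25,  1_000),
    ("Myanmar",    1,   1.82,  1_200),
    ("Algeria",    1,   1.27,  4_000),
    ("Indonesia",  1,   3.42,  4_900),
]

def verdict(conversions: int, cpa: float, gdp: int) -> str:
    if gdp >= 30_000 and cpa < 1.0:
        return "viable"        # high purchasing power, cheap acquisition
    if conversions >= 50:
        return "segment only"  # volume market: carve out the paying slice
    if gdp >= 10_000:
        return "uncertain"     # middle income: payment friction unknown
    return "cut"

for country, conv, cpa, gdp in GEOS:
    print(f"{country:<12} {verdict(conv, cpa, gdp)}")
```

The point of the rule is that neither CPA nor volume alone decides anything: India wins on volume and loses on purchasing power, France is the reverse.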

Finding 3 — Audience Profile

The data paints a clear picture of who is searching. This profile directly informs the next campaign's audience targeting.

Demographics + device + time-of-day data
Google Ads Demographics · Day & Hour data

Primary audience: men, 18–34 years old

Primary device: desktop/laptop — 98% of clicks. This is a research-mode query, not mobile-impulse.

Peak activity: Mon–Fri, 10:00–18:00 — active job seekers during work hours, likely searching while employed and looking to switch

Top signal geos for bid strategy: India, Egypt, Dhaka Division — useful for exclusion in the next phase

Implication: the product experience must be desktop-first. A browser extension already fits this profile natively.

Finding 4 — Retention Is the Next Problem to Solve

337 users signed up. Zero returned after 8 weeks. This is not a product failure signal — it is an activation and habit-formation signal. The acquisition funnel works. The post-signup flow does not.

8-week cohort retention: 0%
Screener Analytics

Weekly cohort retention chart shows a sharp drop to 0% by W1 and holds flat through W8.

Most likely cause: single-use behavior. Users check their resume ATS score once, get the result, and leave. No trigger brings them back.

This is structural: the product needs either a repeating use case (new job alert, weekly score update) or an email/push re-engagement sequence.

Resolution before scaling: fix retention before spending on Western paid traffic. Acquiring $2–3 CPA users who churn at W1 still produces losing unit economics.

Positive framing: the acquisition mechanism is validated. The retention problem is a product problem — solvable independently of market validation.

Unit Economics Model#

Built from experiment data. Three scenarios for the next phase — paid traffic focused on UK, DE, FR, CA, AU.

CPA Benchmark — France as the Floor

France produced $0.32 CPA on 3 conversions — statistically thin but directionally strong. Extrapolating to similar Western geos gives a realistic CPA range for the paid A/B test.

France CPA (actual): $0.32 · 3 conversions · $0.96 total spend

Estimated CPA for UK/DE/CA/AU: $2–3 based on typical HR SaaS CPCs in those markets ($1.20–1.80 avg CPC, 15–25% landing-to-signup conversion)

Note: France's 150% conversion rate is a small-sample artifact — likely retargeting or direct navigation. At scale, model with 15–25% conversion from click to signup.

Scaling Scenarios

Three scenarios for the next phase, assuming a $5,000/mo budget on Western geos and a 3-month average subscription duration (natural job search cycle).

Conservative: CPA $2–3 · Subscription $9/mo · LTV $27 → ROI 9x · Payback: month 1

Base: CPA $2–3 · Subscription $15/mo · LTV $45 → ROI 15x · Payback: month 1

Optimistic (SEO + Ads): CPA $1–2 · Subscription $19/mo · LTV $57+ → ROI 28x+ · Payback: month 1

At $5K/mo budget on Western geos: ~1,500–2,500 new sign-ups. At 5–10% paid conversion: 75–250 paying users.

200 paying users at $15/mo = $3,000 MRR. Over the 3-month cohort LTV window, that cohort returns ~$9,000 against $5,000 of monthly spend — breakeven on ad spend and beyond.

Churn is natural and expected: users find a job, cancel. This is a feature of the niche, not a product problem. Model assumes 3-month cohort LTV, not retention-based compounding.
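The three scenarios reduce to one formula: LTV is subscription price times the 3-month average duration, and the reported ROI is LTV over CPA. A sketch of the table above — the single-point CPAs are my assumptions picked inside the quoted ranges:

```python
# Sketch of the three scaling scenarios. LTV = price x 3-month avg duration;
# ROI here is LTV / CPA, which matches the reported multiples.

MONTHS = 3  # natural job-search cycle

scenarios = {
    "conservative": {"cpa": 3.0, "price": 9},   # high end of the $2-3 CPA range
    "base":         {"cpa": 3.0, "price": 15},
    "optimistic":   {"cpa": 2.0, "price": 19},  # SEO pulls blended CPA down
}

for name, s in scenarios.items():
    ltv = s["price"] * MONTHS
    roi = ltv / s["cpa"]
    print(f"{name:<12} LTV ${ltv} · ROI {roi:.1f}x")
```

Note this ROI is gross per paying user; the 5–10% free-to-paid conversion rate sits between sign-up CPA and revenue, which is why the gate conditions below matter.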

What Needs to Be True for This to Work

Three conditions that must hold for the unit economics to land.

1. Retention fix ships before paid scaling: without at least W2 retention > 0%, even low CPA users don't compound into MRR.

2. Western CPA lands in the $2–3 range: France at $0.32 is the floor signal; the ceiling assumption is $3. If Western CPA exceeds $5, the base scenario breaks.

3. Paid conversion rate ≥ 5%: industry benchmark for freemium → paid in productivity tools. If lower, the subscription price needs to increase or the free tier needs to be narrowed.
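The three conditions read as a single gate predicate. A toy check with the thresholds from the list above (the function name and signature are mine, not part of any real pipeline):

```python
# Gate check before scaling paid spend. Thresholds come from conditions 1-3 above.
def ready_to_scale(w2_retention: float, western_cpa: float, paid_conversion: float) -> bool:
    return (
        w2_retention > 0.0           # condition 1: some W2 retention exists
        and western_cpa <= 3.0       # condition 2: CPA lands in the $2-3 range
        and paid_conversion >= 0.05  # condition 3: freemium-to-paid at benchmark
    )

# Current state: W2 retention is 0%, so the gate is closed regardless of CPA.
print(ready_to_scale(w2_retention=0.0, western_cpa=2.5, paid_conversion=0.06))  # False
```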

Next Steps

The experiment is complete. The roadmap for the next phase.

Cut all non-viable geos from targeting: IN (mass), EG, PK, BD, SY, MM, DZ, ID

Launch geo-focused campaigns on UK, DE, FR, CA, AU with $5K/mo budget

Run pricing A/B test: $9 vs $15 vs $19/mo on Western traffic

Ship retention mechanism before paid scale: email onboarding sequence, job alert feature, or weekly ATS score digest

India segmentation: test Senior/Lead IT targeting in Bangalore and Mumbai — the paying slice within a high-volume market

SEO channel: target "ats resume checker" and "jobscan alternative" keywords organically — CAC = $0 at scale

What $3,430 Bought#

A structured GTM experiment produces answers, not users. Here is what each dollar of the budget returned:

Module | Before | After | Value
Niche demand validation | Assumption | Confirmed (CTR 15–19%) | Priceless
Keyword cluster identification | Unknown | 5 high-intent clusters mapped | Shapes SEO roadmap
Geo screening (10 markets) | Unknown | 9 cut, 1 validated | Saves wasted ad spend
Audience profile | Hypothesis | Men 18–34, desktop, M–F office hours | Targeting precision
CPA floor (France) | Unknown | $0.32 actual, $2–3 modeled for WE | Unit econ foundation
Retention gap identification | Unknown | 0% W8 retention — product problem scoped | Prevents premature scaling
Unit economics model | None | Conservative/base/optimistic built | Fundraising / partner-ready
Total: $3,430 · 6 weeks

A B2B market research agency would charge $30,000–50,000 for a competitor analysis and geo assessment of this depth — without the real conversion data. Paid traffic as research is the GTM engineer's unfair advantage.

Before vs After the Experiment#

Niche confidence

Before: Hypothesis — people need ATS tools
After: Validated — 15–19% CTR, 3,930 clicks, 337 sign-ups in 6 weeks

Geo targeting

Before: Unknown — probably a US-first assumption
After: Data-driven — 9 geos cut, Western markets identified as the paid opportunity

CPA model

Before: No benchmark
After: France at $0.32 CPA · modeled $2–3 for UK/DE/FR/CA

Monetization strategy

Before: Generic freemium
After: Geo-selective — free for emerging markets, paid A/B test for Western

Product priorities

Before: Feature roadmap based on assumptions
After: Retention mechanism is the #1 priority before any paid scaling

Decision Log#

Every methodological choice has a rationale.

Why paid traffic instead of organic or cold outreach?

Speed and signal quality. Organic SEO takes 3–6 months to produce data. Cold outreach gives opinions, not behavior. At $0.87 avg CPC, Google Ads produced 3,930 behavioral data points in 6 weeks — people who searched, read the ad, and clicked. That's the strongest pre-purchase signal available.

Why unrestricted geo targeting?

Restricting geo at launch would have confirmed the assumption, not tested it. Running globally first revealed that the natural audience is overwhelmingly South Asian and Middle Eastern — a finding that completely changes the monetization approach. You cannot discover what you exclude.

Why one campaign instead of multiple A/B tests?

This was a discovery phase, not an optimization phase. One campaign with broad targeting produces a map of where demand lives. Optimization — multiple ad sets, geo bids, price testing — is the next phase, informed by this map.

Why is 0% retention not a failure signal?

Retention requires a product that creates a habit. ATS checking is currently a one-shot task: check score, leave. The experiment was never designed to test retention — it was designed to test acquisition. The 0% is a product backlog item, not a market rejection signal.

Why France at $0.32 CPA and not the other way around?

France produced an anomalously low CPA because the volume was low (3 conversions) — likely retargeting or branded search. The correct read is not "France will always be $0.32" but "Western markets show CPA potential orders of magnitude lower than the campaign average." The experiment confirms the direction; the scaled test will calibrate the number.

Why fix retention before scaling paid?

Leaky bucket math: if W1 retention is 0%, every new user is a one-time cost with zero revenue compounding. At $2–3 CPA and $0 LTV beyond week 1, the unit economics break regardless of how cheap the CPA is. Retention is the gating condition for paid scale.

Takeaways#

1. Use paid traffic as a research tool, not just an acquisition channel.

At <$1 CPC, Google Ads is one of the cheapest ways to run a behavioral market study. The $3,430 budget produced geo distribution, audience demographics, keyword demand curves, and CPA benchmarks — all from real user behavior.

2. High CTR is a stronger validation signal than high conversion.

A 15–19% CTR means the search intent and the ad copy are aligned with an unmet need. Conversion rate can be improved through landing page optimization. A low CTR means the market doesn't recognize the product — much harder to fix.

3. Volume geos and paying geos are almost never the same.

India produced 59% of conversions and almost zero paying potential. France produced <1% of conversions and the lowest CPA in the set. Scaling for volume optimizes for the wrong signal if the goal is revenue.

4. Scope the retention problem before writing the growth roadmap.

A product that doesn't create a returning habit cannot compound CAC into LTV. Identifying the retention gap before scaling is the difference between a growth engine and a leaky bucket.

Experiment Timeline#

Six weeks from zero data to a validated GTM hypothesis.

Mar 16

Campaign launch

Single campaign, unrestricted geo, ATS keyword cluster. Screener analytics connected.

Mar 16–23

Peak sign-up week

~110 sign-ups in the first 7 days. Demand signal confirmed immediately.

Mar 23–Apr 6

Churn pattern emerges

Active user count drops. First indication of single-use behavior and retention gap.

Apr 6–19

Geo and keyword data matures

France CPA anomaly identified. India volume dominance confirmed. Keyword CTR benchmarks stable.

Apr 19

Campaign data cutoff

Final numbers: 3,930 clicks · 337 sign-ups · $3,430 spend · 10 geos mapped.

Apr 27

Analysis complete

Unit economics modeled. Next-phase roadmap scoped. Retention fix identified as gate condition.

The experiment answered what it was designed to answer.

The experiment ran for 6 weeks.

The data answered all four research questions.

The next phase is already scoped.

The only variable left is execution.

Transferable Methodology#

This GTM experiment structure — paid traffic as market signal, unrestricted geo discovery, CPA-to-GDP screening — works for any B2C or prosumer SaaS in a niche with measurable search demand.

Developer tools

Same structure: run unrestricted on Stack Overflow Ads or Google, identify which geos produce the lowest CPA, cross-reference with company engineering budgets per region.

Productivity / career SaaS

ATS tools, resume builders, interview prep — all have the same geo split risk: high search volume in emerging markets, paying users concentrated in Western markets. Run the experiment before assuming global pricing.

Vertical B2B SaaS

Replace sign-up conversion with demo request or free trial. The geo screening logic is identical: CPA × GDP/capita × SaaS spending willingness produces a prioritized market map.

The experiment methodology is repeatable. What changes is the keyword cluster, the conversion event, and the geo set. The framework — use paid traffic to generate behavioral data before committing to a growth strategy — is domain-agnostic.

Building something and not sure which market to go after first?

I ran this experiment for Advogram and turned $3,430 of ad spend into a validated geo map, a CPA benchmark, and a unit economics model. The same methodology applies to any product with measurable search demand.

FAQ#

Why not just target the US from the start?

Because that assumption would have been wrong and expensive. The experiment showed that the natural search audience for ATS tools is overwhelmingly South Asian and Middle Eastern — India delivered 59% of all conversions (199 out of 337 sign-ups). The US and Western Europe have lower search volume but far higher purchasing power. Blind US targeting would have meant paying the $2–4 CPA typical for B2C SaaS while ignoring France at $0.32 CPA — 10–30x cheaper than every other geo in the set. You find this by looking at real data from unrestricted geo targeting, not by assuming based on conventional wisdom about Western markets. The experiment cost $3,430 — learning this after launching with a user base that won't pay would have cost far more.

Is $3,430 a realistic budget for a validation experiment?

Yes — and possibly more than needed for initial signal. The key signals (15–19% CTR, geo distribution, keyword performance) were visible within the first two weeks and ~$1,000 of spend. The full 6-week run (March 16 – April 19, 2026) produced cleaner data on retention and geo CPA benchmarks for 10 countries. Result: 3,930 clicks, 337 sign-ups, avg CPC $0.87 — significantly below the typical $2–4 range for B2C SaaS. An optimization score of 99.9% means the campaign was structurally sound. For comparison, learning the same lessons after launching with a user base that won't pay would cost far more. $3,430 bought a validated geo map, a CPA benchmark, and a unit economics model — cheap for de-risking a growth strategy before scaling.

What does the 0% retention actually mean?

It means that of 337 registered users, none returned to the Screener after their first week. The weekly cohort retention chart drops sharply to 0% at W1 and stays flat through W8. It does not mean the product is broken — it means the current product is a one-shot tool, not a recurring habit. Most likely cause: users check their ATS resume score once, get the result, and leave. No trigger brings them back. This is structural: the product needs either a repeating use case (new job alert, weekly score update) or an email/push re-engagement sequence. Positive framing: the acquisition mechanism is validated (15–19% CTR, $0.87 avg CPC). The retention problem is a product problem, solvable independently of market validation. Fix retention before spending on Western paid traffic — that's the gate condition for scaling.

Why France and not Germany or the UK?

France appeared in the data by chance through unrestricted geo targeting — it was not targeted specifically. Result: 3 conversions at $0.32 CPA with GDP/capita of ~$45K — the highest purchasing power in the set and a CPA 10–30x cheaper than all other geos. The insight is not "target France" — it's "Western Europe converts at a dramatically lower CPA than the rest of the set." Western traffic was nearly absent in the experiment not because it doesn't exist, but because geo targeting was unrestricted and Eastern markets dominate on search volume. France at $0.32 CPA is a signal, not an outlier. The next phase will run targeted campaigns on UK, DE, FR, CA, and AU to calibrate which geo produces the best CPA at scale while maintaining high purchasing power.

What is the product actually charging now?

Advogram is currently free and open-source on GitHub. The paid A/B test ($9 vs $15 vs $19/mo) is the next phase, contingent on shipping the retention mechanism first. The logic: charging before solving retention would validate willingness to pay, but not the ability to compound it into MRR. If 0% of users return after W1, even a 100% payment conversion rate caps LTV at one month — losing unit economics at a $2–4 CPA on Western markets. Fix retention first (a repeating trigger: weekly score updates, job alerts, an email sequence), then test pricing on an audience that returns. This gives clean data on willingness to pay without confounding it with the activation problem. The experiment showed that demand exists — now the product needs to match that demand with a repeating use case.

How does this experiment feed into a fundraising narrative?

It provides the three numbers investors ask for earliest: a validated demand signal (15–19% CTR vs the 3–5% B2C SaaS average), a CAC benchmark (France CPA $0.32, avg CPC $0.87 vs the typical $2–4), and an LTV model ($9–19/mo subscription × average duration after the retention fix). Most pre-seed pitches have none of these from real data — only assumptions and projections. This experiment produced all three for under $4,000 of spend. It also yields a geo map of 10 countries showing where to scale (France, UK, DE) and where not to spend (the India mass market without segmentation; Egypt and Pakistan with low purchasing power). That is an operational map for the next phase, not a hypothesis. Investors see that the founder knows how to validate assumptions cheaply before committing to an expensive growth strategy — it de-risks their investment and demonstrates a data-driven approach to GTM.

Andrey Rogovsky

Senior AI Engineer · GenAI · MLOps · Cloud

25 years of infrastructure. Now I build AI that survives production with MCP + RAG + K8S.

More about the author →
© 2026 Andrey Rogovsky. All rights reserved.