Why we killed the one-way video interview at Merra and built something else
CVs give you claims. One-way videos give you performances. Neither gives you evidence. Why we replaced both with conversational, evidence-based first-round screening on every applicant.
The first-round interview is becoming infrastructure
For most of the last decade, the first round of hiring has been held together by tape.
A recruiter posts a role. A few hundred CVs come in. Someone, usually a junior recruiter or an automated keyword filter, decides which ones look credible enough to advance. Some companies bolt on a one-way video tool to make the next step feel more rigorous. A candidate sits alone in front of a camera, gets sixty seconds to think and ninety seconds to talk. The video gets watched at 1.5x speed, often by someone who has already half-decided based on the CV.
That is the system. It's the system at startups, at scale-ups, at the Fortune 500. It's the system at companies hiring 50 people a year and the ones hiring 5,000.
And the people inside it know it's broken. Recruiters know it. Hiring managers know it. Candidates definitely know it.
We started Merra because the first-round interview is becoming infrastructure, the same way background checks and ATSs became infrastructure ten years ago. And the version of that infrastructure most companies are running today doesn't deserve the name.
What CVs and one-way video actually give you
It's worth being honest about the two things most companies use to filter candidates today, because both are doing less than people think.
CVs are claim-based
A CV is a claim. "Led a team of 8." "Grew revenue 40%." "Managed a $2M budget." Those are not facts. They're assertions written by the candidate, often with help from a friend, a coach, or, increasingly, ChatGPT.
CV screening, even AI-powered CV screening, is a faster way to evaluate claims. It is not a way to evaluate people. The strongest candidates are often the worst self-promoters. The weakest candidates often have the most polished CVs. Anyone who has hired at volume has lived this.
One-way video is performance-based but shallow
One-way video was supposed to fix this. Instead of reading claims, you'd see candidates speak.
It didn't fix it. It introduced new problems and kept most of the old ones.
It's awkward. Talking to a camera with a countdown timer is the most artificial communication setup most candidates will ever face. You're not seeing how they communicate. You're seeing how they perform under a uniquely uncomfortable format.
It punishes the wrong people. Neurodivergent candidates, candidates with English as a second language, and candidates without quiet home setups are penalised by the format itself, not by their fit for the role.
It still gets watched at 1.5x. Recruiters skim videos the same way they skim CVs. The output looks more rigorous. The decision is the same gut call.
It produces almost no evidence. A recording exists. There is rarely a transcript. There is no scored evaluation tied to the role's actual focus areas. Try going back six months later and explaining why you advanced one candidate and not another. There is nothing to show.
The one-way video interview isn't dead because the format disappeared. It's dead because candidates and recruiters both quietly stopped believing in it.
Manual first-round calls don't scale
The other option, the one most experienced TA leaders quietly believe is the only honest one, is to do every first round as a real human conversation. A 20-minute call. A structured rubric. Notes.
That works. We've all seen it work. The problem is it doesn't survive contact with reality at any kind of volume. One recruiter, 30 open reqs, 200 applicants per req. The maths doesn't work. So shortcuts get taken. The bar gets quietly lowered. Hiring managers complain that the shortlist looks weaker than it used to. Recruiters burn out. Time-to-fill creeps up.
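To put numbers on that, using the 20-minute call from above:

```python
reqs_per_recruiter = 30
applicants_per_req = 200
minutes_per_call = 20  # the structured first-round call described above

total_hours = reqs_per_recruiter * applicants_per_req * minutes_per_call / 60
print(total_hours)  # 2000.0 hours: roughly a full working year of nothing but first rounds
```

Two thousand hours is an entire working year per recruiter, before any scheduling, note-taking, or debriefs. That's why the shortcuts happen.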
Every senior TA leader I talk to knows this. None of them have a good answer.
The real bar: evidence
The argument I want to make in this post is simple.
The job of the first-round interview is not to filter. It's to produce evidence.
Evidence that a hiring manager can look at and make a decision from. Evidence that holds up six months later when someone asks why a particular candidate did or didn't advance. Evidence that's the same shape for every candidate, so comparisons are actually meaningful.
For every role, you should walk away from the first round with four things on every applicant (sketched concretely after this list):
A recording of an actual conversation. Not a monologue.
A full transcript, searchable and reviewable.
A structured evaluation, scored against the focus areas the hiring manager actually cares about.
A decision summary that explains why the score is what it is, in language a human can defend.
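If you want to picture that evidence pack as a data object, here's a minimal sketch. The field names are ours for illustration, not a real schema:

```python
from dataclasses import dataclass

@dataclass
class FocusAreaScore:
    name: str       # e.g. "stakeholder communication"
    score: int      # 0-100, scored against this focus area
    rationale: str  # why the score is what it is, in plain language

@dataclass
class EvidencePack:
    candidate_id: str
    recording_url: str                 # an actual conversation, not a monologue
    transcript: str                    # full text, searchable and reviewable
    evaluation: list[FocusAreaScore]   # one entry per focus area the hiring manager set
    decision_summary: str              # a defensible explanation of the scores
```

The shape matters more than the fields: every candidate gets the same structure, which is what makes side-by-side comparison meaningful.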
That is what we mean by evidence-based first-round screening. It's a different category from CV screening. It's a different category from one-way video. It's a different category from chatbots. It's the category we think first-round interviewing is becoming.
Why "AI screening" alone isn't enough
A lot of tools today call themselves "AI screening." Most of them are doing one of two things:
AI resume screening. Parsing CVs faster. Ranking applicants faster. Filtering before a human ever sees them. This makes the same weak claim-based decision faster. It does not make it better.
AI chatbots. Asking candidates a few questions over text. Useful for logistics. Not evidence of how someone thinks under pressure or communicates in a real conversation.
Neither of these creates evidence. They create speed.
Speed without evidence is just faster guessing.
The interesting question is not "how do we screen candidates faster?" It's "how do we get better data before a human makes a decision?"
That's where Merra started.
What we built instead
Merra runs a 10–15 minute structured AI video conversation with every single applicant for a role. Not the top 20%. Every applicant.
The interviewer agent is configured by the hiring team for each role: focus areas, must-haves, calibration questions, and tone. It conducts a real two-way conversation. It follows up. It probes when an answer is thin. It scores against the focus areas, not a generic rubric.
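For illustration, configuring the agent for a role might look something like this. Every field name and value here is hypothetical, not Merra's actual API:

```python
role_config = {
    "role": "Senior Account Executive",
    "focus_areas": [
        "pipeline management",
        "discovery and qualification",
        "negotiation under pressure",
    ],
    "must_haves": ["right to work in the UK", "B2B SaaS experience"],
    "calibration_questions": [
        "Walk me through the largest deal you closed last year.",
    ],
    "tone": "warm but direct",
    "duration_minutes": 15,
}
```

The point is that the hiring team, not the vendor, decides what gets probed.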
When the round closes, the hiring team logs in to Merra and the shortlist is right there. Every candidate, ranked. For each one:
The full recording of the conversation.
The full transcript.
A scored evaluation across the focus areas, out of 100.
An overall match score and decision summary.
The hiring manager doesn't have to take the AI's word for anything. The evaluation is evidence first. The score is just a summary of the evidence underneath it.
This is the part most people don't realise until they see it: the most useful artefact isn't the score. It's the audit trail. Six months after the role closes, you can pull any candidate's evidence pack and see exactly what they said, how they said it, and why they were or weren't advanced.
That's not a faster CV screen. That's a different category.
"But can we trust the score?"
This is the right question. It's the first question every serious TA leader asks. We expect it.
The honest answer is: don't trust the score. Trust the evidence.
The score exists to make the evidence sortable. The transcript and the recording are what you actually defend a decision with.
That said, we've also stress-tested the scoring directly. We took a single transcript and ran it through Merra's evaluation 10 times. The scores landed in a tight range, with a 95% confidence interval of about ±1.2 points out of 100. We're going to publish the full data in a separate post. The point isn't "AI is perfect." The point is that the variance is known and bounded, and lower than the variance you'd see between two human interviewers scoring the same call.
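If you want to run the same sanity check on your own tooling, the maths is simple. A minimal sketch, using placeholder scores (not our measured data) chosen to land near the spread we observed:

```python
import math
import statistics

# Ten scores from re-running the same transcript through the evaluator.
# Illustrative placeholders, not Merra's measured data.
scores = [70, 73, 71, 74, 72, 70, 73, 75, 71, 72]

n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample standard deviation

# Two-sided 95% t critical value for n - 1 = 9 degrees of freedom.
T_CRIT = 2.262

half_width = T_CRIT * sd / math.sqrt(n)
print(f"mean = {mean:.1f}, 95% CI = ±{half_width:.1f} points out of 100")
# mean = 72.1, 95% CI = ±1.2 points out of 100
```

Run the same calculation on two human interviewers scoring the same calls, and you'll have the comparison that actually matters.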
The goal isn't to remove humans from hiring. It's to give humans better data to decide on.
What this changes downstream
When the first round produces real evidence on every applicant, a few things change that matter more than they sound:
Hiring manager confidence goes up. They are looking at conversations, not CVs. The shortlist feels different.
Time-to-shortlist collapses. Not because anyone is moving faster. Because the bottleneck (recruiter capacity to do first rounds) is gone.
Bias surface area shrinks. Every candidate gets the same structured conversation, the same focus areas, the same scoring. Not bias-free. But the bias is legible and addressable.
Candidate experience improves. This is the part that surprised us. Candidates consistently say the conversation feels less performative than a one-way video and less stressful than a phone screen. They get to be themselves for 10 minutes. Then they get feedback.
You build a real talent database. Every candidate who interviews, even if they don't advance, leaves behind structured evidence you can search later. Most companies discard this signal. It's some of the most valuable data in the business.
The category we're trying to build
We are not trying to be a faster CV screener. We are not trying to be a better one-way video tool. We are not trying to replace your ATS.
We are trying to make evidence-based first-round screening the standard for how serious hiring teams run the top of the funnel.
That means:
CV screening becomes the triage layer (logistics: right to work, location, comp band), sketched after this list.
The first evaluative step is a structured conversation with evidence.
The hiring manager spends their time on candidates with real signal, not on guesswork.
The audit trail is built in, not bolted on.
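To make that division of labour concrete, triage is the kind of gate you can write as a few boolean checks. A hypothetical sketch, with field names invented for illustration:

```python
def passes_triage(candidate: dict) -> bool:
    """Logistics-only gate: eligibility checks, no judgement about quality."""
    return (
        candidate.get("right_to_work", False)
        and candidate.get("location") in {"London", "Remote (UK)"}
        and candidate.get("expected_comp", 0) <= 90_000  # hypothetical comp-band ceiling
    )
```

Everything judgement-shaped happens after this gate, in the conversation.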
That's the version of first-round interviewing we want to be infrastructure five years from now. Same way background checks are infrastructure today.
What this means for you, practically
If you run TA at a company doing real volume, here's how to think about whether this matters yet:
If your recruiters are owning more than 20 reqs each, evidence-based first-round screening is no longer a nice-to-have. It's how you get capacity back without hiring more recruiters.
If you're using a one-way video tool today, ask yourself when you last successfully defended an advance/reject decision using the recording six months later. If the answer is "never," the format isn't producing evidence.
If your hiring managers complain that the shortlist quality has dropped, the problem is almost never the recruiters. It's that the first round isn't producing enough signal for anyone to make a good decision.
If your candidate experience scores are sliding, the one-way video step is almost certainly the cause. Candidates have been telling you for years.
Run a pilot on one role
The fastest way to understand what evidence-based first-round screening actually looks like is to run it on one of your live roles.
We'll set up Merra for a single requisition, run every applicant through a structured 10–15 minute video interview, and hand you the ranked shortlist with the recording, transcript, and scored evaluation for every candidate.
If the evidence pack is better than what your current first round produces, you'll know in a week.
Run a pilot on one role and see the evidence pack Merra gives your team.