If you've been in recruiting long enough, you've watched an interviewer walk into a room, glance at a resume, and wing it. "So... tell me about yourself." Forty-five minutes later, the feedback is: "Good culture fit" or "Didn't feel right." No notes on what was asked, no scoring, no way to compare that candidate to the next one.
This is the unstructured interview. It's the default at most companies, and decades of research say it's one of the least effective ways to predict whether someone will actually succeed in the role.
What makes an interview "structured"
A structured interview has three ingredients:
- Predetermined questions — every candidate is asked the same questions, in the same order, targeting the same competencies.
- A scoring rubric — each answer is rated against a defined scale (not gut feel).
- Consistent evaluation — the same criteria and weights apply to every candidate.
An unstructured interview, by contrast, is a conversation. The interviewer decides what to ask on the fly, evaluates candidates based on overall impression, and scoring (if it happens at all) is subjective.
Most real-world interviews fall somewhere in between — maybe there's a list of suggested questions but no rubric, or a rubric exists but interviewers freestyle half the time.
What 100 years of research tells us
The most comprehensive study on this topic comes from Frank Schmidt and John Hunter, who analyzed 85 years of personnel selection research across hundreds of studies and thousands of hires. Their findings, published in Psychological Bulletin, are the gold standard in industrial-organizational psychology.
Structured interviews have a validity coefficient of 0.51. That means they explain about 26% of the variance in actual job performance, making them one of the best single predictors available to hiring teams.
Unstructured interviews score 0.20. They explain roughly 4% of job performance variance. That's barely better than a coin flip with extra steps.
To put this in plain terms: if you're using unstructured interviews to screen candidates, you're capturing about 40% of the predictive power a structured approach would give you.
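The arithmetic behind those percentages is simple: variance explained is the square of the validity coefficient. A minimal sketch, using the coefficients quoted above:

```python
# Variance in job performance explained by a predictor is the square
# of its validity coefficient (r squared).
def variance_explained(validity: float) -> float:
    return validity ** 2

structured = 0.51    # validity of structured interviews
unstructured = 0.20  # validity of unstructured interviews

print(f"Structured:   {variance_explained(structured):.0%} of variance")    # ~26%
print(f"Unstructured: {variance_explained(unstructured):.0%} of variance")  # ~4%
print(f"Relative predictive power: {unstructured / structured:.0%}")        # ~39%
```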
Other research backs this up:
- A meta-analysis by Wiesner and Cronshaw found structured interviews are twice as valid as unstructured ones across all job types.
- McDaniel et al. confirmed that situational and behavioral structured questions outperform conversational formats by a wide margin.
- Campion, Palmer, and Campion showed that adding structure to interviews consistently improved both reliability (interviewers agree) and validity (interviews predict performance).
This isn't a close call. The research consensus has been settled for decades.
Why unstructured interviews persist
If the evidence is this clear, why do most companies still use unstructured interviews? A few reasons:
"I'm a good judge of people." Most interviewers genuinely believe they can read candidates accurately. Research on interviewer confidence vs. accuracy says otherwise — there's essentially no correlation. The interviewers who are most confident in their judgment are not more accurate than anyone else.
It feels more natural. A structured interview can feel rigid to both sides. Interviewers worry about sounding robotic, and there's a fear that candidates won't get to "be themselves." But natural doesn't mean effective. Natural conversations favor charismatic, extroverted candidates — not necessarily the most competent ones.
Nobody built the rubric. Creating good structured interview questions and scoring rubrics takes real effort upfront. Many teams skip it because they're busy filling roles and it feels like overhead. The irony is that skipping structure costs far more time in bad hires and do-over searches.
Hiring managers resist it. Some managers see structured interviews as taking away their autonomy. They want to ask their questions, follow their instincts. This is understandable — but it's also why the same team can interview the same candidate and come back with completely opposite recommendations.
The real cost of unstructured interviews
Beyond predictive validity, unstructured interviews create practical problems:
You can't compare candidates. If Candidate A was asked about leadership and Candidate B was asked about time management, you're comparing apples to office chairs. There's no common baseline.
Legal exposure increases. Unstructured interviews are harder to defend in discrimination claims because there's no documented, consistent process. Every question is ad hoc, which makes it difficult to prove the process was fair.
Bias runs unchecked. Without a rubric, interviewers default to heuristics — affinity bias (they remind me of myself), halo effect (they went to a great school, so they must be great at everything), and first-impression bias (the first 30 seconds determine the outcome). Structured interviews don't eliminate bias, but they dramatically reduce it by forcing attention to job-relevant criteria.
Your team's time is wasted. Think about the hours your team spends debriefing after interviews. Without structured data, debrief meetings devolve into storytelling: "I liked when they said..." "They seemed nervous to me..." These conversations can go in circles because there's no shared framework for evaluation.
What switching to structured interviews looks like
Moving from unstructured to structured doesn't require a year-long transformation. Here's the practical version:
Step 1: Define what you're evaluating
Before writing a single question, agree on the 3-5 competencies that actually matter for the role. Not a wish list of 15 traits — the core capabilities that separate good from great in this specific position.
For a customer support manager, that might be:
- Conflict resolution ability
- Team leadership and coaching
- Process thinking
- Communication clarity
Step 2: Write questions that target those competencies
Each competency gets 1-2 questions. Use behavioral questions ("Tell me about a time when...") or situational questions ("How would you handle..."). Every candidate gets the same questions.
Step 3: Build a scoring rubric
For each question, define what a 1, 3, and 5 look like. Be specific. "Good answer" is not a rubric. "Described a specific conflict, explained their approach, and articulated the outcome with measurable results" is a rubric.
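One way to keep a rubric specific is to encode it as data rather than leaving it as prose in a shared doc. The competency, question, and anchor wording below are hypothetical examples, not a prescribed format:

```python
# A hypothetical rubric entry for one competency. The anchors define
# what a 1, 3, and 5 look like; interviewers score only against these.
rubric = {
    "conflict_resolution": {
        "question": "Tell me about a time you resolved a conflict "
                    "between two team members.",
        "anchors": {
            1: "Vague or hypothetical answer; no specific conflict described.",
            3: "Described a real conflict and their approach, but no clear outcome.",
            5: "Specific conflict, clear approach, outcome with measurable results.",
        },
    },
}

def is_valid_score(score: int) -> bool:
    # Scores between anchors (2 and 4) are allowed; anything outside 1-5 is not.
    return 1 <= score <= 5
```

Writing the anchors down as data has a side benefit: the same file can drive the interview guide, the scoring form, and the debrief report.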
Step 4: Train your interviewers
This is the step most teams skip, and it's the most important one. Interviewers need to understand: ask the questions as written, let the candidate talk, score against the rubric (not against other candidates), and take notes on evidence (not impressions).
Step 5: Use the data
After interviews, don't start with "What did you think?" Start with scores. Compare candidates numerically first, then discuss. This prevents the loudest voice in the room from anchoring the decision.
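Starting the debrief from scores can be as simple as a weighted average per candidate. The weights, competency names, and scores below are illustrative:

```python
# Hypothetical competency weights (summing to 1.0) and per-candidate
# scores on a 1-5 scale, collected from interviewers' rubric ratings.
weights = {"conflict_resolution": 0.4, "team_leadership": 0.3,
           "process_thinking": 0.2, "communication": 0.1}

candidates = {
    "Candidate A": {"conflict_resolution": 4, "team_leadership": 5,
                    "process_thinking": 3, "communication": 4},
    "Candidate B": {"conflict_resolution": 5, "team_leadership": 3,
                    "process_thinking": 4, "communication": 3},
}

def weighted_score(scores: dict) -> float:
    return sum(weights[comp] * score for comp, score in scores.items())

# Rank candidates numerically before any discussion starts.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The point of the ranking isn't to automate the decision; it's to anchor the conversation in evidence before opinions enter the room.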
Structured interviews at scale: where it gets hard
The framework above works well when you're hiring for 5-10 roles and your interviewers are trained and motivated. It gets significantly harder when:
- You're screening hundreds of applicants — you can't have a trained interviewer spend 30 minutes on each one and still hit your hiring timeline.
- Your interviewers aren't consistent — even with rubrics, different interviewers score differently. Studies show inter-rater reliability improves with structure but never reaches perfect agreement.
- You're hiring across many roles simultaneously — maintaining separate question sets and rubrics for 20 open positions is a real operational burden.
- First-round screens are the bottleneck — the biggest volume of interviews happens at the top of the funnel, exactly where most teams have the least structure.
This is the gap that AI interview platforms are designed to fill. An AI agent asks every candidate the same questions, evaluates answers against the same rubric, and scores consistently whether it's the first interview of the day or the three-hundredth. It doesn't get tired, doesn't have a bad morning, and doesn't have an unconscious preference for candidates who went to the same school.
The result is structured interviewing applied uniformly at any volume — something that's nearly impossible to achieve with human interviewers alone at scale.
Common objections (and honest answers)
"Structured interviews are boring for candidates." They can be, if done poorly. But the alternative — a rambling conversation where the interviewer talks about themselves for 20 minutes — isn't exactly a great candidate experience either. The best structured interviews feel like focused, purposeful conversations. Candidates often prefer them because they know exactly what's being evaluated and feel the process is fair.
"We'll miss the candidate's personality." Personality absolutely matters for many roles. The solution isn't to abandon structure — it's to include behavioral questions that surface personality traits relevant to the job. "Tell me about a time you disagreed with your team's direction" reveals far more about someone's personality than 10 minutes of small talk.
"Our hiring managers will never go for it." Start small. Run a structured interview alongside the unstructured one for a few roles. Compare the outcomes after 6 months — which hires are performing better, which ones churned? Data tends to change minds faster than arguments.
"It takes too long to set up." It does take effort upfront — maybe 2-3 hours per role to define competencies, write questions, and build a rubric. But consider the alternative: a bad hire costs 30-50% of their annual salary (recruiting costs, onboarding, ramp time, productivity loss, and re-hiring). A few hours of preparation to avoid even one bad hire pays for itself many times over.
The bottom line
The evidence is not ambiguous. Structured interviews predict job performance more than twice as well as unstructured ones. They reduce bias, improve consistency, make candidate comparison possible, and hold up better under legal scrutiny.
The cost of structure is real — it requires preparation, discipline, and buy-in from interviewers. But the cost of not having structure is higher: bad hires, wasted interview hours, and a screening process that's only marginally better than random selection.
Whether you implement structure through training and templates, through technology, or through a combination of both — the first step is the same: stop winging it.
Want to bring structured, rubric-based interviews to every candidate automatically? See how Zivaro works — consistent AI interviews scored against your criteria, at any volume.