
An interview with Athena Tavoulari on the moments where human stewardship matters most in hiring
AI is reshaping HR in many ways. In this conversation, we zoom in on the hiring decision process in Talent Acquisition: what evidence gets used, and how decisions are made and defended, as AI becomes part of the workflow. Steps that used to take weeks, like market mapping, sourcing, screening, and early shortlisting, can now move in days.
Athena Tavoulari is one of our Executive Search Partners at KennedyFitch. She speaks with TA leaders, CHROs, and business stakeholders regularly about what is changing in hiring decisions. So we asked Athena: In a more AI-enabled hiring process, where does human stewardship matter most?
What is the most important shift AI is creating in how hiring decisions get made in Talent Acquisition?
Speed isn’t the interesting part anymore. Everyone sees that. The interesting shift is what that speed changes about how we make decisions.
What’s happening now is that AI is showing up in two places at once. In the day-to-day work, GenAI helps with first drafts, synthesis, and sharper briefs, so we spend less time on the admin and coordination that quietly eats the week. But AI is also increasingly embedded in ATS and talent platforms, shaping how information is structured, highlighted, prioritized, and summarized.
The opportunity is that TA can take that time back from process and reinvest it in the human work that actually improves hiring: clearer role thinking, better conversations, stronger judgment, and a better candidate experience. The faster the system moves, the more value there is in being deliberate about the human moments.
Where exactly is AI showing up in the hiring process today?
I see it as three layers. The first is at the point of role definition, before sourcing even begins. AI can support role profiling, intake preparation, market intelligence, and creating the job description. It sounds basic, but it’s where the strongest hiring decisions are set up, because clarity on the role, the outcomes, and what “good” looks like makes everything that follows more consistent and more fair.
The second layer is workflow acceleration. This is where AI is already saving TA teams real time: drafting and adapting postings, improving candidate communication, scheduling, note-taking, creating consistent candidate summaries, and helping hiring teams compare information. Used well, it reduces friction and gives recruiters and hiring managers more capacity for the human work that actually differentiates decisions.
The third layer is decision influence, and this is where I am most cautious. The moment AI is ranking, screening, or shaping who gets seen and who moves forward, the standards have to rise. You need clarity on what criteria are being applied, how you validate fairness and relevance, and who owns the call. In my view, AI should strengthen the quality of the decision, not quietly replace it. The best teams use AI to remove noise and improve consistency, while keeping judgment, calibration, and accountability firmly human.


When AI is increasingly part of the workflow, what does good judgment look like, especially in how candidates are surfaced, prioritized, and summarized?
Good judgment here means treating AI outputs as decision inputs. When AI is involved in screening, ranking, or summarising, it can quietly influence who moves forward and how people interpret what they are seeing.
That influence doesn’t always show up in a single decision. It shows up in patterns over time. For example, you can start to see the same profiles repeatedly lifted to the top because they come from a familiar set of schools or employers, or because their application matches a particular style. It can also show up when people follow the AI’s direction more than they realize, so having “a human in the loop” doesn’t automatically neutralize bias.
The teams that navigate this best keep decisions anchored in a shared definition of what “good” looks like for the role. That way AI outputs stay in proportion: a useful input, not a substitute for judgment, and the conversation stays grounded in evidence. And this is where judgment and oversight have to travel together. Good judgment sets the criteria; data shows you the consequences. Over time, the organisations that feel most confident with AI in hiring are the ones that keep learning.
We hear a lot about “perfect CVs” and AI-polished applications. What should teams do differently?
The “perfect CV” effect is real. I hear it often: everything aligns with the job description, the language is polished, and the summary reads like it was written for that role. Then interviews start, and sometimes there’s a gap between what’s written and what someone can actually explain, reason through, or ground in real experience.
At the same time, we shouldn’t overreact. A CV isn’t only wording. Career steps, scope, context, outcomes, and the specifics behind achievements still matter, and it’s hard to sustain an invented track record through multiple steps without it showing somewhere. Strong TA has always been about looking past the surface and validating what’s real. What does shift is what you treat as signal. Writing quality alone becomes less differentiating, so the emphasis moves to specificity, consistency, and examples that hold up when you ask follow-up questions. The practical question I come back to is: what is your process rewarding right now? If it rewards grounded examples and consistent reasoning, you will surface capability more reliably.
How transparent should we be with talent about where AI is used in the process?
Candidates don’t need a technical explanation, but they do need clear notice when AI could affect whether they move forward, especially if it’s used to screen, rank, or otherwise meaningfully evaluate them. Practically, I would keep it simple: if AI can influence progression, say so early, explain what it’s doing in plain language, and be clear where a person remains accountable. That’s also where expectations are already heading in law and practice. Rules like the GDPR and the EU AI Act, New York City’s AEDT framework, and Illinois’ AI video interview law all point the same way: if AI can influence hiring decisions, candidates should be told.
What do you think TA leaders should always have in place during the hiring process, especially now that AI is becoming more embedded?
For me, it comes down to three: fair, safe, and human.
Fair means being clear on what the process is optimising for, and checking whether outcomes match that intent. Bias is not something you outsource to a tool. It is shaped by inputs, constraints, and the decisions humans make around it. So “fair” becomes practical: structured criteria, calibration, and the ability to audit outcomes.

Safe includes governance, but I also think it includes confidentiality. Treating data governance as professional integrity becomes essential. This also includes being intentional about where candidate data goes, which tools can be used, and what gets stored in the ATS.

Human is about trust and what you can actually learn about someone. Candidates share differently when they feel evaluated by something they cannot understand. Human means clarity, care, and consistency at the moments that shape trust. Two-way communication still matters, especially when AI is involved: candidates react better when they feel informed and get a real opportunity to show capability.
In practice, this is less about principles and more about clear rules and ownership: decision rights, guardrails, what happens when something feels off, and who reviews outcomes. The strongest set-ups are cross-functional by design, but that is a whole discussion on its own.
“The ‘perfect resume’ effect is real, but a CV isn’t only wording. Career steps, scope, context, outcomes, and the specifics behind achievements still matter, and it’s hard to sustain an invented track record through multiple steps without it showing somewhere.”

What is the biggest shift you expect in how organisations think about talent flows?
The biggest shift is that internal mobility and external hiring can start to operate as one system, with skills as the shared language. Instead of treating “internal” and “external” as two separate funnels, you can look at the same demand signal, the skills the work needs, and see what you can redeploy, what you can develop, and what you genuinely need to hire. It also widens the lens from “who do we employ?” to “what capability can we mobilise to get the work done?”
AI is what makes that practical. It helps build a more dynamic skills picture from the information organisations already have, and it powers talent marketplaces that match people to opportunities based on skills. Over time, it becomes a learning system. Each move and hire creates feedback about what success actually requires, which skills mattered most, and where your definitions need sharpening. The unlock is governance: one set of capability definitions, one decision logic, and one measurement loop, so internal moves and external hires are judged by the same evidence, not different standards.
Looking ahead to 2026, what is the biggest opportunity for TA leaders as AI becomes more embedded in hiring?
The opportunity for TA will come from being clear about what your hiring model is trying to achieve, because AI will scale whatever sits underneath it. If your decisions are well-designed, it will amplify consistency and quality. That starts with keeping people in the loop where it matters: clarity on what success in the role looks like, stronger separation between real signal and polished narrative, and judgment tested in real conversations.
It also means closing the gap between TA and data. Partnership with People Analytics helps connect selection signals to outcomes, validate what actually predicts success, and keep learning as roles and markets change. And then there’s accountability. Make that operational: set clear human oversight, monitor outcomes for bias or drift, and keep a simple audit trail of AI input versus human decisions. And put basic guardrails on candidate data use, access, and retention, so trust and compliance don’t become the weak link.
I’m optimistic. Used well, AI can help hiring become clearer and more human: less noise, better conversations, and decisions leaders can stand behind.
For organisations seeking to strengthen their leadership teams or navigate complex talent challenges, Athena is always open to a conversation about how the right executive search and leadership alignment can drive clarity, strategic traction, and lasting impact.
Get in touch with Athena Tavoulari




