Fri Apr 10, 2026 6:47pm PST
Ask HN: Hiring in the age of AI-assisted coding: what works?
I saw the HackerRank (YC S11) hiring post (https://news.ycombinator.com/item?id=47667011) and it made me realize I no longer understand how to evaluate candidates effectively.

Specifically, the post describes changing hiring across three dimensions:
- Tasks: real-world tasks on code repositories vs. standard algorithmic-style puzzles
- Evaluation: AI fluency and orchestration skills vs. functional correctness
- Candidate experience: an agentic IDE vs. a simple code editor

In the “old world,” you could ask a candidate multiple questions and triangulate skill from the answers. Now evaluation seems to depend heavily on tools and models that change month to month.

So I’m curious:
- What signals actually correlate with strong engineers today?
- How do you design interviews that don’t become obsolete with the next model release?
- Are algorithmic interviews still useful at all?

Would love to hear from people who have recently changed their hiring process or have been interviewed using this new approach.
