Science you can stake talent decisions on.
Measuring people is hard. To make talent decisions you can trust, accuracy and reliability aren't optional. Skillvue is built on I/O psychology and psychometrics, ensuring every data point holds up to scrutiny.
Two disciplines, one standard of rigor
I/O Psychology
Defining what to measure and why it matters
I/O psychology grounds our platform in decades of research on human performance at work, ensuring we map the right skills, select the right constructs, and use the right mix of assessment types for each talent decision.
Psychometrics
Defining how to measure it right
Psychometrics governs the design of every assessment we build: which format, which scale, which scoring model. The goal is simple: maximize accuracy, minimize noise, and make sure results mean what they claim to mean.
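To make "maximize accuracy, minimize noise" concrete, one standard psychometric check is internal-consistency reliability. Below is a minimal sketch using Cronbach's alpha on illustrative data; this is a generic textbook estimator, not Skillvue's actual scoring model.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 5-item assessment, 6 respondents, 1-5 scale
scores = np.array([
    [4, 4, 3, 4, 4],
    [2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 3, 2],
    [4, 5, 4, 4, 4],
    [1, 2, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # ~0.97: items measure one construct consistently
```

Values near 1 indicate the items hang together as a measure of a single construct; low values are one signal that an assessment's results may not "mean what they claim to mean."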
The team behind the science

Tony Lee, Ph.D.
Head of AI & Science
Computational psychologist with two Ph.D.s and hands-on experience in machine learning and AI-based assessment. His interdisciplinary background brings a unique perspective to the assessment field, combining psychological science with advanced AI and machine learning techniques. At Skillvue, he leads the AI & Science team, designing, validating, and deploying new competency assessment models built on the latest technologies.
Jatin Babbar
Senior Machine Learning Engineer

Serena Dolfi, Ph.D.
People Scientist

Wamiq Raza
Machine Learning Engineer

Luca Sbrollini
People Scientist
External collaborators from academia, HR consulting, and the corporate world
A rigorous, end-to-end assessment lifecycle
Define constructs
Identify what to measure, grounded in I/O psychology research and the client's competency model.
Better evidence
AI unlocks richer, more direct evidence of skill through realistic scenarios, interactive tasks, and multiple response modalities that reflect how work is actually done.
Rigor at scale
We embed assessment science into the product so rigor scales with the system. Clear constructs, evidence-centered design, and governed scoring prevent AI from introducing noise.
Continuous evolution
Because skills and roles evolve quickly, measurement must evolve with them. Continuous monitoring and scientist-led iteration keep signals accurate and defensible.
Responsible AI built for high-stakes talent decisions
Transparent scoring
Every score comes with an explanation: what was measured, how it was scored, and what evidence supports it.
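A transparent score is, in practice, a structured record rather than a bare number. The sketch below shows one plausible shape for such a record; the field names and values are hypothetical, not Skillvue's schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScoreExplanation:
    """Illustrative shape for a transparent score record (hypothetical fields)."""
    construct: str       # what was measured
    score: float         # the resulting score
    scoring_method: str  # how it was scored
    evidence: list[str] = field(default_factory=list)  # what supports it

report = ScoreExplanation(
    construct="Problem solving",
    score=4.2,
    scoring_method="Rubric-anchored rating, 1-5 scale",
    evidence=[
        "Identified root cause in scenario task",
        "Proposed and compared two alternative solutions",
    ],
)
print(report.construct, report.score)
```

Keeping construct, method, and evidence attached to every score is what lets a reviewer, or an auditor, trace how a number was produced.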
Human oversight
AI recommends; humans decide. HR teams can edit, override, and make the final call on every assessment.
Continuous monitoring
Drift checks, stability reviews, and scoring audits detect changes before they affect results.
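One common way to implement a drift check is the Population Stability Index (PSI), which compares the current score distribution against a baseline from validation time. This is a generic sketch of the technique on synthetic data, not Skillvue's monitoring pipeline.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline score distribution and a current one.
    Rules of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(current, bins=edges)
    p = np.clip(p / p.sum(), 1e-6, None)  # avoid log(0) in empty bins
    q = np.clip(q / q.sum(), 1e-6, None)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
baseline = rng.normal(70, 10, 5000)  # scores at validation time
stable = rng.normal(70, 10, 5000)    # same underlying distribution
shifted = rng.normal(78, 10, 5000)   # scores have drifted upward
print(population_stability_index(baseline, stable))   # small: no drift
print(population_stability_index(baseline, shifted))  # large: triggers review
```

Running a check like this on a schedule is how distribution changes get flagged for scientist review before they affect candidate-facing results.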
Regulatory compliance
Built from the ground up for GDPR, the EU AI Act, ISO 27001, and SOC 2. Auditable by design.