Your people completed AI training. Can they actually use AI?
Genplify is AI proficiency measurement for the enterprise — see what your workforce can actually do with AI, not what courses they’ve completed.
90% of enterprises face critical AI skills shortages. The cost: $5.5 trillion in delayed products, missed revenue, and impaired competitiveness through 2026 (IDC forecast). Genplify closes the gap between AI adoption and AI proficiency.
Illustrative example — not a real employee
THE MEASUREMENT GAP
The AI training gap that nobody is measuring
Your organisation has invested in AI training. Courses have been completed. Certifications have been issued. But here is what the data shows: completion is not competence.
Most enterprises cannot answer a basic question — can our people actually apply AI to their work?
The BCG–Harvard study found AI-proficient professionals produce 40% higher-quality work — but suffer a 19-percentage-point drop when they use AI on tasks outside its capability frontier. The difference is not training. It is judgment. And judgment is what completion certificates do not measure.
THE ASSESSMENT
This is not a quiz
Six psychometric formats — scenario simulation, prompt crafting, error identification, multi-turn dialogue, ranking, and case generation. Each captures a different facet of AI proficiency that cannot be measured by watching a video and clicking ‘complete.’
30 items. 30 minutes. Every employee receives a unique, dynamically assembled form from a 100-item pool. No two people receive the same assessment.
Six formats · Five dimensions · IRT-scored · 30 minutes
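How a unique form can be drawn from a fixed pool is easy to sketch. The snippet below is a minimal illustration, not Genplify's production logic; the item schema, format names, and per-format quota are assumptions chosen to match the numbers above (six formats × five items = 30).

```python
import random

FORMATS = ["scenario", "prompt", "error", "dialogue", "ranking", "case"]

def assemble_form(pool, formats=FORMATS, per_format=5, seed=None):
    """Draw a balanced form from an item pool.

    pool: list of dicts with a 'format' key (illustrative schema).
    Returns per_format items from each format, shuffled, so that
    every test-taker sees a different 30-item form.
    """
    rng = random.Random(seed)
    form = []
    for fmt in formats:
        candidates = [item for item in pool if item["format"] == fmt]
        form.extend(rng.sample(candidates, per_format))
    rng.shuffle(form)
    return form
```

Seeding per employee (e.g. from an invite token) makes each form reproducible for audit while still unique across the workforce.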
THE CURRICULUM
This is not a training video
29 lessons of essay-style content at Economist reading level. Written to teach professionals, not to test students. Calibrated hands-on exercises throughout.
Each lesson includes hands-on practice — like flagging errors in AI-generated content where false positives cost you points. The same calibrated judgment the assessment measures.
29 lessons · 10 min each · 5 acts · 3 companion exercises
THE RESULTS
This is what your board sees
One number. One percentile. One heatmap. One click to export a board-ready summary.
Illustrative data — names and scores are fictional. Employee and manager dashboards available in the product — schedule a consultation to see the full experience.
OUR SCIENCE
Built on assessment science, not course completion
Genplify uses the same psychometric framework behind the GMAT, GRE, and TOEFL — Item Response Theory — adapted for measuring AI proficiency across five dimensions and six measurement formats.
Our methodology has been reviewed by leading I/O psychology researchers whose work on situational judgment tests and assessment design has been cited over 35,000 times in the academic literature. Reviewer identities are available on request.
Read the full methodology →
Item Response Theory estimates proficiency by accounting for item-specific difficulty. More precise than percentage-correct. Trusted by millions of test-takers worldwide.
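The core idea behind IRT scoring can be shown in a few lines. This is a minimal two-parameter logistic (2PL) sketch with ability estimated by gradient ascent on the log-likelihood; Genplify's actual model, item parameters, and estimator are not public, so everything below is illustrative.

```python
import math

def estimate_theta(responses, difficulties, discriminations,
                   iters=100, lr=0.5):
    """Estimate ability (theta) under a 2PL IRT model.

    responses: 1 = correct, 0 = incorrect, one entry per item.
    difficulties / discriminations: the items' b and a parameters.
    Maximizes the log-likelihood by gradient ascent from theta = 0.
    """
    theta = 0.0
    for _ in range(iters):
        grad = 0.0
        for x, b, a in zip(responses, difficulties, discriminations):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # P(correct)
            grad += a * (x - p)  # d(log-likelihood)/d(theta)
        theta += lr * grad
    return theta
```

This is why IRT beats percentage-correct: two people can each answer two of three items correctly, but the one who did it on harder items (higher b) receives a higher theta.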
Every score includes a range band. We never show a number without telling you how precise it is. When two scores overlap, we say so.
Designed for pre- and post-measurement on the same psychometric scale. Growth reported only when it exceeds the minimum detectable difference.
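Both rules above reduce to the same arithmetic on standard errors. The sketch below is an illustration of that logic under standard 95% confidence assumptions, not Genplify's published scoring code.

```python
import math

def overlaps(score_a, se_a, score_b, se_b, z=1.96):
    """True when two scores' 95% bands overlap, i.e. the gap
    between them may be measurement noise rather than a real
    difference."""
    return abs(score_a - score_b) < z * math.hypot(se_a, se_b)

def detectable_growth(pre, post, se_pre, se_post, z=1.96):
    """Return the pre-to-post gain only when it exceeds the
    minimum detectable difference implied by the two standard
    errors; otherwise return None (no claim of growth)."""
    mdd = z * math.hypot(se_pre, se_post)
    gain = post - pre
    return gain if gain > mdd else None
```

With standard errors of 2 points each, the minimum detectable difference is about 5.5 points, so a 3-point "improvement" is reported as no measurable change.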
WHAT CHANGES
When your board asks ‘are we AI-ready?’ — you have an answer.
A defensible answer for the board
When leadership asks ‘are we AI-ready?’ — you show them a number, a percentile, and a heatmap. Not a completion certificate. Not a survey. A psychometric measurement with honest uncertainty built in.
Training budget redirected by data
See exactly which teams score Foundational on which dimensions. Redirect spend from generic programmes to the three specific skill gaps that matter most — before the next budget cycle.
Growth that means something
Pre- and post-assessments are linked through anchor items on the same scale. When Genplify reports improvement, it has exceeded the minimum detectable difference. When it hasn’t, we tell you that too.
A defensible answer when the board asks about AI readiness. Proficiency data that shows exactly where to redirect training budget — with range bands that prevent over-interpreting small differences between teams.
Built for enterprise L&D leaders
HOW IT WORKS
Simple for everyone involved
No complex rollout. No months of configuration. You invite your people, they assess and develop, and the results flow to your dashboard automatically.
Employees receive a single link. No app download. No separate account. They begin when they’re ready, during work hours.
A 30-minute baseline assessment produces a personal proficiency profile. Then targeted lessons — 10 minutes each, self-paced — build the skills the assessment identified.
Dashboards populate as employees complete assessments. A post-programme assessment measures real change — reported only when it exceeds the measurement error.
PRICING
Transparent pricing. No surprises.
Per-employee pricing that scales with your organisation.
Full AI proficiency assessment plus 29-lesson curriculum.
- Pre- and post-assessments
- Five-dimension proficiency profiles
- 29-lesson self-paced curriculum
- Employee + manager dashboards
- Team benchmarking + range bands
SSO, API, HRIS, white-label, board-ready exports.
- Everything in Assess + Learn
- SSO / SAML integration (on request)
- HRIS integration
- White-label exports (PDF + PPT)
- Custom benchmarking cohorts
- Dedicated account team
The AI Proficiency Gap: What the BCG–Harvard study means for professional services
40% quality improvement on the right tasks. 19-point performance drop on the wrong ones. The difference is not training — it is judgment.
Read the research →
Ready to measure what matters?
Measure what your workforce can actually do with AI — starting with a deployment tailored to your organisation.
No commitment required · Early adopter pricing available · Response within 48 hours