Every field has its favourite jargon and buzzwords – and People Analytics is no exception.

From “headcount envelopes” to “skills adjacency”, the language can feel like a puzzle if you’re not steeped in it every day. To make life easier (and meetings clearer), we’ve put together a plain-English guide.

Think of it as your decoder ring for the conference: no mystery acronyms, no baffling finance-speak, just simple explanations you can use right away.

A/B test  Compare two options by splitting users randomly.
A/B/C option framing  Presenting three realistic choices (A, B, C), each with capacity, cost range and time‑to‑coverage, so Finance can compare and choose.
Absence  Time not worked due to sickness, leave or other reasons.
Accruals  Costs recorded when incurred, not when paid.
Adoption  The extent to which teams are actually using the new process, model or plan.
Adoption telemetry  Data showing whether teams are actually using the new process or tool.
Adverse impact  Disproportionate negative effects on protected groups.
AI assurance  Evidence that an AI system is safe, fair and robust.
Algorithmic transparency  Explain how a model affects decisions.
API  Application programming interface; a standard way for systems to talk to each other.
Artefacts  The simple documents you publish and maintain to run the process.
Assumptions register  A list of the key assumptions behind the plan, each with its value/range, source, owner and date.
Attrition  People leaving the organisation (voluntary or involuntary).
Attrition bias  People drop out over time and skew results.
Automation  Using technology to do work that people did before; sometimes an alternative to adding headcount.
Backlog  The ordered list of work to do.
Baseline  Your starting point for comparison.
Batch vs real‑time  Move data in chunks vs continuously.
Behavioural barrier  A habit or incentive that blocks adoption.
Benchmarking  Comparing your numbers to a peer group.
Blue‑green deployment  Run old and new versions side by side, then switch.
Build / buy / blend / borrow / automate  Five ways to get work done: train your own; hire; mix internal and external; use contractors/partners temporarily; use technology instead of adding people.
Build plan  Concrete steps and timelines to create skills or capacity internally.
Cadence  The regular rhythm for reviews. Example: a monthly check‑in.
Calibration  How well predicted probabilities match real outcomes.
Canary release  Roll out to a small group first to check for issues.
Capacity  The amount of work a team can deliver in a period (hours, cases, features, etc.).
Capacity maths  Simple calculations that show how many hours or units of work your team can deliver.
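If it helps to see the arithmetic behind capacity maths, here is a minimal Python sketch; the team size, hours, shrinkage and utilisation figures are made-up for the example, not benchmarks.

```python
# Illustrative capacity maths: how many productive hours can a team deliver
# in a quarter? All figures below are made-up assumptions for the example.

team_size = 12              # people on the team
hours_per_week = 37.5       # contracted hours per person
weeks_in_quarter = 13       # calendar weeks in the period
shrinkage = 0.20            # share of time lost to leave, training, meetings
utilisation = 0.75          # share of remaining time spent on productive work

gross_hours = team_size * hours_per_week * weeks_in_quarter
net_hours = gross_hours * (1 - shrinkage) * utilisation

print(f"Gross hours: {gross_hours:,.0f}")
print(f"Net productive hours: {net_hours:,.0f}")
# With these assumptions: 5,850 gross hours -> about 3,510 net productive hours.
```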
CDC (change data capture)  Streaming only what changed since last time.
Census survey  A survey of the whole workforce, not just a sample.
CFO  The most senior finance leader in a company. Often signs off major spending.
CFO note (monthly)  A short update to Finance: what changed (movement), how the method is holding up, adoption, confidence and the next decision.
CFO‑ready artefacts  Documents laid out to answer Finance’s core questions fast.
Champions  Enthusiastic early users who help others adopt.
Change network  Named people in units who help land the change.
CHRO  Chief HR Officer, the most senior HR leader.
Class imbalance  When one outcome is rare (e.g., 2% attrition).
Closing panel  The final group discussion with several speakers and audience Q&A.
COE (centre of excellence)  A small expert team that sets standards and supports others.
Cohort chart  Tracks groups over time from a common start point.
Collective bargaining  Negotiation between employer and union on pay and conditions.
Communications plan  Who needs to hear what, when and how.
Compa‑ratio  Actual pay divided by the midpoint of the pay band (e.g., 0.95 means pay is 5% below the midpoint).
Confidence  How sure you are about a number or forecast; often shown by a range rather than a single point.
Confidence bands  Upper and lower bounds around a forecast to show uncertainty.
Confidence interval  A range that likely contains the true value.
Confounder  A hidden factor that distorts results.
Contingent workforce  Contractors, temps, and agency staff.
Cost range  A span instead of a single number (e.g., £450k–£520k) to reflect uncertainty.
Coverage  Whether enough people are on shift to meet demand.
Coverage & capacity test  Show how fast you can cover the work and how much work the team can deliver.
Critical path  The steps that control the earliest finish date.
Critical role  A role that strongly affects value or risk.
Cross‑sectional  One‑off snapshot at a point in time.
Cross‑validation  Test a model by training and testing on different splits.
Data lakehouse  A data store that supports both raw files and analytics tables.
Data lineage  Where data came from and how it was changed.
Data mesh  Domain teams own their data as products with standards.
Data quality dimensions  Accuracy, completeness, timeliness, consistency, uniqueness.
Data retention  How long you keep data and when you delete it.
Decision rights  Which roles are allowed to approve or reject a choice.
Decision sheet  One page showing options A/B/C with capacity, cost range, time‑to‑coverage, risks and a recommendation.
Decision test  Be explicit about the choice you want signed now (e.g., hire vs redeploy).
Deduplication  Merging duplicate records into one.
Definition of done  The checklist to call a story complete.
Demand model  Estimate of future work to be done (e.g., cases per week next quarter).
Difference‑in‑differences  Compare before/after across treated and control groups.
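A tiny worked example (a Python sketch with made-up retention figures) shows how the comparison works:

```python
# Difference-in-differences with made-up retention figures (illustrative only).
treated_before, treated_after = 0.82, 0.88   # retention rate, treated group
control_before, control_after = 0.81, 0.83   # retention rate, control group

treated_change = treated_after - treated_before   # +6 percentage points
control_change = control_after - control_before   # +2 percentage points

did_effect = treated_change - control_change      # the estimated effect
print(f"Estimated effect: {did_effect:+.2%}")     # roughly +4 percentage points
```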
Differential privacy  Adding noise so individuals can’t be identified.
Domain data product  A well‑documented, trustworthy dataset owned by a domain team.
DPIA  Data protection impact assessment; checks privacy risks before launch.
DRI (directly responsible individual)  The named person who owns an outcome.
Drift (data/concept)  When data or relationships change over time.
Driver  An input that moves a result (e.g., hiring rate, attrition).
Driver analysis  Which factors most move an outcome.
Early attrition  Leaving within the first months (e.g., 90 days).
Effect size  How big the difference or relationship is, not just whether it exists.
Elasticity  The responsiveness of one variable to changes in another.
Embeddings  Numeric vectors that represent meaning of text for search and clustering.
Engagement  How committed and energetic people feel at work.
eNPS  Employee Net Promoter Score; likelihood to recommend the employer.
Entity extraction  Pulling out names, skills, locations from text.
Envelope (finance/headcount)  The cap Finance sets on spend or roles for a period.
Epic  A large piece of work broken into smaller stories.
Erlang C  A formula to staff call centres based on wait targets.
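For the curious, here is a minimal Python sketch of the standard Erlang C calculation; the call volume, handle time, agent count and answer target below are illustrative assumptions only.

```python
from math import exp, factorial

def erlang_c_wait_probability(arrival_rate, avg_handle_time, agents):
    """Probability a caller has to wait (standard Erlang C formula).

    arrival_rate and avg_handle_time must use the same time unit,
    e.g. calls per hour and handle time in hours.
    """
    offered_load = arrival_rate * avg_handle_time          # traffic in Erlangs
    if agents <= offered_load:
        return 1.0                                          # queue never clears
    top = (offered_load ** agents / factorial(agents)) * (
        agents / (agents - offered_load))
    bottom = sum(offered_load ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def service_level(arrival_rate, avg_handle_time, agents, target_wait):
    """Share of callers answered within the target wait time."""
    p_wait = erlang_c_wait_probability(arrival_rate, avg_handle_time, agents)
    offered_load = arrival_rate * avg_handle_time
    return 1 - p_wait * exp(-(agents - offered_load) * target_wait / avg_handle_time)

# Illustrative inputs: 120 calls/hour, 5-minute handle time, 14 agents,
# 20-second answer target. All figures are made-up for the sketch.
print(service_level(120, 5 / 60, 14, 20 / 3600))
```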
ETL/ELT  Move and transform data: Extract‑Transform‑Load or Extract‑Load‑Transform.
Event stream  Real‑time feed of changes (e.g., new hires, status updates).
Executive one‑pager  A single page with the headline, the number and the ask.
Explainability vs interpretability  Explaining a specific prediction vs understanding the overall model.
External labour supply  How many suitable people are available in the job market.
External market checks  Using outside data to test realism (e.g., supply of engineers, wage trends).
F1 score  The harmonic mean of precision and recall; a single score that balances the two.
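Precision and recall are defined further down this list; a tiny worked example with made-up counts shows how the three fit together.

```python
# Worked example with made-up counts from a confusion matrix (illustrative only).
true_positives = 30    # predicted leaver and actually left
false_positives = 20   # predicted leaver but stayed
false_negatives = 10   # predicted stayer but actually left

precision = true_positives / (true_positives + false_positives)   # 0.60
recall = true_positives / (true_positives + false_negatives)      # 0.75
f1 = 2 * precision * recall / (precision + recall)                # about 0.67

print(f"Precision {precision:.2f}, recall {recall:.2f}, F1 {f1:.2f}")
```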
Feature  An input variable used by a model.
Feature store  A shared place to keep cleaned model inputs.
Few‑shot/zero‑shot  Getting an AI to perform a task with few or no examples.
Finance  The team that manages budgets, forecasts and company money.
Finance‑grade lenses  Ways of evaluating a plan that meet Finance’s standards for rigour and auditability.
Fine‑tuning  Training an AI on your examples to shift its behaviour.
Forecasting  Projecting future numbers using current data and assumptions.
Four‑fifths rule  A quick screen for adverse impact (each group's selection rate should be at least 80% of the highest group's rate).
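The screen is simple enough to show in a few lines of Python; the selection rates below are made-up for illustration.

```python
# Four-fifths rule check with made-up selection rates (illustrative only).
selection_rates = {"Group A": 0.50, "Group B": 0.35, "Group C": 0.45}

highest = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / highest
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.0%} of highest -> {flag}")
# Group B is at 70% of the highest rate, below the 80% screen.
```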
FP&A  Financial Planning & Analysis; the finance team that builds budgets and forecasts and partners with the business.
FTE (Full‑Time Equivalent)  A way to add up different work patterns. 1.0 FTE = one full‑time person; 0.5 FTE = half‑time.
Fully loaded cost  Salary plus benefits, taxes, tools and overhead.
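A minimal sketch of the calculation, with illustrative figures (the 1.30 on-cost multiplier and the tools and overhead numbers are assumptions, not benchmarks):

```python
# Illustrative fully loaded cost: salary plus on-costs, tools and overhead.
# The 1.30 on-cost multiplier and the overhead figures are made-up assumptions.
base_salary = 60_000
on_cost_multiplier = 1.30        # employer taxes, pension, benefits
tools_per_person = 3_000         # licences, laptop, equipment
overhead_per_person = 5_000      # office space, support functions

fully_loaded = base_salary * on_cost_multiplier + tools_per_person + overhead_per_person
print(f"Fully loaded cost: £{fully_loaded:,.0f}")   # £86,000 with these assumptions
```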
Funded decisions / funded choices  Options that have been approved and given budget to execute.
Funnel chart  Shows drop‑off across stages in a process.
Gen‑AI (generative AI)  AI that can read, generate or summarise text, code or images.
Golden source  The single, agreed best source for a data element.
Governance  Who decides what, and how those decisions are checked.
Grounding  Forcing AI to use your trusted data as source material.
Hallucination  When an AI confidently makes something up.
Headcount envelope  The cap set by Finance for total people cost or number of roles.
Heatmap  A coloured grid to show intensity.
Hiring funnel  Stages from applicants to hires.
Holdout set  A final, untouched sample to test real performance.
HR–Finance operating handshake  A practical agreement: Finance sets the headcount envelope; HR/SWP bring options that fit the envelope and lift capability and productivity.
Hub  A hiring/operating location you cluster roles in (e.g., Dublin vs Kraków).
Hub‑and‑spoke  One expert hub supporting several local teams.
Human‑in‑the‑loop  Keep people in key steps to review and override.
ICE plots  Show feature effects for individual cases.
Identity resolution  Matching records that belong to the same person.
Impact (track)  Label for sessions that lead with business outcomes and value.
Inclusion index  A composite score of belonging and fairness.
Instruction tuning  Training an AI to follow instructions better.
Internal mobility  Moves within the company, lateral or upward.
Internal pipeline readiness  How many near‑ready candidates you already have inside the company and how fast they can be moved.
Interrogate (numbers)  Examine and question the numbers to test reliability.
JD parsing  Using software to read job descriptions and extract skills/requirements.
JDs (job descriptions)  The written description of a role’s responsibilities and requirements.
Job architecture  The structured map of roles, levels and bands.
Job family  A set of related roles with similar skills and progression.
K‑anonymity  Ensure each person looks like at least k‑1 others in a dataset.
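Here is a minimal Python sketch of that check, assuming a made-up dataset and an illustrative threshold of k = 5:

```python
# A minimal k-anonymity check: count how many people share each combination
# of quasi-identifiers. The records and the k = 5 threshold are illustrative.
from collections import Counter

records = [
    {"department": "Sales", "location": "Dublin", "band": "B2"},
    {"department": "Sales", "location": "Dublin", "band": "B2"},
    {"department": "Engineering", "location": "Kraków", "band": "B3"},
]

k = 5
groups = Counter((r["department"], r["location"], r["band"]) for r in records)
too_small = {combo: n for combo, n in groups.items() if n < k}
print(f"{len(too_small)} combinations fall below k={k} and would need "
      "suppressing or further grouping before release.")
```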
KPI  A key performance indicator; a metric that matters most.
Label/target  The value a model tries to predict.
Lagging indicator  A metric that moves after outcomes (e.g., retention after onboarding).
Lawful basis  The legal reason to process data (e.g., consent, contract).
Layering  Number of management levels from top to front line.
Lead time  Time from request to delivery (e.g., 10 weeks to hire a data engineer).
Leading indicator  A metric that moves before outcomes (e.g., pipeline health before hires).
Leakage  Using information in training that wouldn’t be available in reality.
Legitimate interest  A lawful basis that requires safeguards and balancing tests.
Likert scale  An agreement scale, typically 1 (strongly disagree) to 5 (strongly agree).
Longitudinal  Tracking the same people over time.
MAE/MAPE/RMSE  Mean absolute error, mean absolute percentage error and root mean squared error; common error measures for forecasts and regressions.
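A short Python sketch, using made-up forecast and actual figures, shows how each measure is calculated:

```python
# Illustrative error measures for a headcount forecast (made-up numbers).
actual = [100, 110, 120, 130]
forecast = [98, 115, 118, 140]

errors = [f - a for f, a in zip(forecast, actual)]
mae = sum(abs(e) for e in errors) / len(errors)                       # mean absolute error
mape = sum(abs(e) / a for e, a in zip(errors, actual)) / len(errors)  # mean absolute % error
rmse = (sum(e ** 2 for e in errors) / len(errors)) ** 0.5             # root mean squared error

print(f"MAE {mae:.1f}, MAPE {mape:.1%}, RMSE {rmse:.1f}")
```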
Market test  Prove the plan is realistic using labour‑market data and internal pipeline.
Market tightness  When supply of a skill is scarce relative to demand.
Market validation  Checking your plan against external data to confirm it’s realistic.
Master data  Core entities like people, roles, locations with unique IDs.
Method (for costs)  How you worked the number out: assumptions, rates, multipliers and what’s included/excluded.
Mini‑case  A short, anonymised example that demonstrates the method.
MLOps  The process to deploy and maintain models reliably.
Model registry  A catalogue of models with versions and approvals.
Model risk  The chance a model is wrong or misused.
Money test  Show costs as a range, how you calculated them, and where the money will come from.
Monitoring  Watching models for drift, errors and bias after launch.
Movement  What has changed since last month (e.g., roles filled, cost variance, skills added).
Multipliers  Factors applied to raw numbers to reflect reality (e.g., benefits on salary, utilisation).
Narrative spine  The simple story arc of problem, options, choice, result.
Net capacity  Capacity left after subtracting losses like attrition, holidays, training and ramp‑up.
Next bet  The next decision or investment based on current progress.
Non‑response bias  Missing views from those who didn’t respond.
Nudge  A small design choice that makes the right action easier.
Offer acceptance rate  Offers accepted divided by offers made.
OKR  Objectives and key results; goals plus measurable results.
One‑pager  A single‑page summary of the decision or plan.
On‑ramp  An easy starting point that leads into a bigger topic or debate.
Ontology/taxonomy mapping  Align different lists to a common standard.
Operating cadence  The regular rhythm of meetings and reviews (e.g., monthly).
Opex vs capex  Operating expenses vs capital investments.
Orchestration  Scheduling and monitoring data jobs (e.g., nightly pipelines).
Outcomes vs dashboards  Outcomes are business results (e.g., revenue saved). Dashboards are reports; they don’t prove value on their own.
Overfitting  A model learns noise and fails on new data.
Overtime  Hours beyond normal schedule, often at a higher rate.
Ownership  The person accountable for a decision or deliverable.
Ownership & cadence test  Name who is accountable and how often progress will be reviewed.
Pack  A small bundle of handouts or digital files for delegates.
Panel  A fixed group surveyed repeatedly.
Partial dependence (PDP)  Shows average effect of a feature on predictions.
Partner/contract  Use an external firm or contractor for a time‑boxed period.
Pattern, not brand  Explaining a repeatable method without naming a specific company or product.
Pattern‑based  Reusable approaches you can apply in other teams or companies.
Pay equity  Paying fairly across comparable roles and levels.
Pay progression  How pay moves over time within a band.
PII  Personally identifiable information; personal data that can identify someone.
Plenary  A session for the whole audience, not a breakout.
Post‑mortem  Honest review after delivery to learn and improve.
Power (sample size)  Chance of detecting a real effect given the sample.
Precision  Of the predicted positives, how many were correct.
Prediction interval  A range that likely contains the next observed value.
Pre‑mortem  Imagine failure in advance to find and fix risks.
Productivity  Output per unit of input (e.g., cases per agent per day).
Proficiency level  Depth of skill (e.g., beginner, working, expert).
Prompt  The instruction you give an AI model.
Propensity score matching  Pair similar subjects to reduce bias in comparisons.
Pseudonymisation  Replacing identifiers with codes while keeping links possible.
Pulse survey  A short survey run frequently to track changes between full surveys.
p‑value  The probability of seeing a result at least as extreme as the one observed if there were no real effect; lower values mean stronger evidence against chance.
Quality‑of‑hire  How well new hires perform versus expectations (e.g., ramp time, performance, retention).
Quarter  A three‑month business period (Q1, Q2, etc.).
Quasi‑experiment  An experiment‑like test without full randomisation.
Queueing  The maths of waiting lines and service times.
RACI  Role map: Responsible, Accountable, Consulted, Informed.
RAG (retrieval‑augmented generation)  Fetch documents first, then have AI answer using them.
RAID log  Risks, Assumptions, Issues, Dependencies in one place.
Ramp time / ramp‑up  Time for a new hire to hit expected output.
RCT  Randomised controlled trial; the gold standard for experiments.
Reallocation  Moving budget from one area to another rather than asking for new money.
Recall  Of the actual positives, how many were found.
Redacted  Sensitive details removed or masked.
Redeployment  Moving current employees to different roles or teams instead of hiring new people.
Red‑team  Try to break a system to find risks before launch.
Reference data  Allowed lists like country codes or pay grades.
Reforecast  Update the forecast mid‑period based on new facts.
Regression to the mean  Extreme values tend to move closer to average next time.
Reskill  Train someone to move into a different role.
Response bias  Answers skewed by who chose to respond.
Right to work  Legal permission to work in a country.
ROC‑AUC  Area under the ROC curve; a summary score of classification performance across all thresholds.
Role archetype  A standardised role pattern used across teams.
Roster optimisation  Arrange shifts to meet demand at least cost.
R‑squared  How much of the variation a model explains.
Run rate  Current monthly rate projected over a year.
Safety filter  Blocks harmful or sensitive outputs.
Sankey diagram  Shows flows between categories with bands.
Scarcity index  A signal of how hard it is to find a skill in the market.
Scenario  A plausible version of the future used to test a plan.
Scenario planning  Preparing multiple plausible futures (A/B/C) with triggers for action.
Scenario range  The low/medium/high outcomes considered.
Schedule adherence  How well people stick to planned shifts.
Scope (for costs)  What is in the estimate and what is not (e.g., includes salaries and tools, excludes relocation).
Seat  A funded role or position.
Seed the panel  Tee up targeted questions to spark the following discussion.
Sensitivity analysis  How much a result changes if an input shifts.
Sensitivity table  A small grid showing how outputs change with inputs.
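Here is a minimal Python sketch of such a grid, using made-up attrition rates and cost-per-hire figures:

```python
# A tiny sensitivity table: annual replacement cost as two inputs shift.
# Team size, attrition rates and cost-per-hire figures are made-up assumptions.
team_size = 200
attrition_rates = [0.08, 0.10, 0.12]
costs_per_hire = [8_000, 10_000, 12_000]

print("attrition | " + " | ".join(f"£{c:,}/hire" for c in costs_per_hire))
for rate in attrition_rates:
    hires_needed = team_size * rate                  # expected leavers to replace
    cells = [f"£{hires_needed * c:,.0f}" for c in costs_per_hire]
    print(f"{rate:.0%}       | " + " | ".join(cells))
```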
SHAP values  A way to explain model predictions by feature contributions.
Shift bid  Workers choose preferred shifts, usually by seniority or points.
Shrinkage  Time lost to meetings, breaks, training and absence.
Sign‑off  Formal approval by the accountable person.
Simpson’s paradox  A trend seen within separate groups that reverses when the groups are combined.
Skill cluster  A group of related skills that travel together in roles.
Skill inference (with Gen‑AI)  Using Gen‑AI to read job descriptions and extract the underlying skills required.
Skill taxonomy / skills taxonomy  A structured list of skills with clear names and groups, so you can compare and roll them up.
Skills adjacency  Skills close enough that people can learn them quickly.
Skills forecasting  Estimating which skills you will need, in what quantity, and when.
Skills ontology/graph  A map of how skills relate to each other.
SLA/SLO  Service level agreement / service level objective; promises on availability or freshness of a service or dataset.
Slide‑light  Very few slides; talk‑first.
Slopegraph  Shows change between two points across categories.
Small multiples  Many small charts with the same axes to compare groups.
Source of funds  Where the money will come from (reallocated from another budget or new spending).
Span of control  Number of direct reports per manager.
Sparkline  A tiny line chart without axes for quick trend scans.
Spine chart  Shows spread around a median across categories.
Sprint  A short, fixed work cycle (e.g., two weeks).
Story  A small, testable unit of work.
Supply model  Forecast of people available, including hiring, exits and internal moves.
SWP (Strategic Workforce Planning)  Planning the people and skills needed to deliver the business strategy.
Talent density  Proportion of high performers in a team.
Telemetry  Automated signals from systems that show usage or progress.
Template  A pre‑formatted document you can reuse quickly.
Throughput  The amount of work finished per period (cases per week, features per sprint).
Time‑to‑coverage  How long until the work is adequately staffed. Example: ‘6 weeks to reach 90% coverage’.
Time‑to‑fill  Days between job posting and offer acceptance.
Time‑to‑start  Days between offer acceptance and start date.
Tokens  Small chunks of text used for AI pricing and limits.
Trigger  A condition that tells you to switch plan (e.g., vacancy rate passes 8%).
TUPE  Transfer of Undertakings (Protection of Employment); UK rules protecting staff when work or services move to a new employer or supplier.
Unit economics  Profit or cost per unit (e.g., per hire, per ticket).
Uplift modelling  Predict who is more likely to change behaviour if treated.
Utilisation  Share of time spent on productive work (e.g., 75%).
Vacancy cost  Loss from an empty role (missed output or extra overtime).
Vacancy rate  Open roles as a percentage of total roles.
Variance  The gap between actual and plan.
Variance to budget  Over or under the plan.
Vector database  A store that finds nearest vectors (similar meaning text) fast.
Vendor‑neutral  Not promoting a specific product or supplier; methods work with any toolset.
Vignette  A brief, illustrative example or story.
Wage trends / wage signals  Whether pay for a role is rising or falling in the market.
Waterfall chart  Explains a total by showing adds and subtracts.
Workload  Amount of work expected in a period.
Works councils  Employee bodies with consultation rights in some countries.
Zero‑based budgeting  Build the budget from zero, not last year’s spend.