I find that giving all my stakeholders context through a relatable example really helps them understand how I can be a good partner to them. In this article I want to share a Notion page that I wrote for my People team, in which I walk through an example to explain what I do and how it helps my stakeholders.
The project I use as the example is our recurring performance review cycle. These cycles are familiar to everyone on the People team, since everyone is involved in (or affected by) the reviews. The insights and work I delivered to and with my stakeholders were presented in one of our monthly meetings to show how much time we saved the People Partners, both then and for future recurring work.
One of our People Partners checked back on the actual time saved and found that work which used to take her 38 hours per performance review cycle now takes only 2. I love that we can put a number on this: every 3-4 months, my automation work saves her (and the other People Partners) almost a full working week!
What is a (People) Data Analyst to the business?
As a Data Analyst, I am essentially a Business Analyst who can code.
Broadly, my role is to enable business growth in my subject area by analysing business behaviour. I put systems in place that can measure and assess business growth. I use these systems to find patterns, expose weaknesses, identify drivers, and make recommendations for improvements.
My role is also to drive business growth by enabling other people to support their decision-making with facts. Specifically, I enable stakeholders’ data-driven decision making by building reusable dashboards and providing data-based insights/answers to business questions.
What does this look like for a People team?
A (simplified) example: Performance reviews. One of my first projects at Cleo was to help with the upcoming performance review cycle.
What was the task?
The issue [people partners’ name] presented to me was that People Partners spend a lot of time preparing for performance reviews, which, given our hiring pace, would inflate their workload to an unmanageable degree. It also prevented People Partners from focusing on understanding the ‘why’ of performance behaviours.
Implicitly, a majority of performance review cycle tasks add time-debt and could benefit from automation. We therefore needed to identify tasks that repeat with every performance cycle, grow as headcount grows, and can be automated.
We identified the following:
- Manually identifying employees that need to be discussed during calibration sessions with SLT members
- Manually identifying notable variances in ratings amongst the managers
- Throughout and after the performance cycle: identifying and analysing variances in ratings amongst different demographic groups, and between different business functions and organisations.
How was I, as the Data Analyst, able to help with these issues?
- Sourcing business context from People Partners and Strat&Ops Managers, I built an algorithm that automatically identifies employees with unusual performance behaviour or notable business behaviour
- The output is available as a table in our BI tool (Mode) that refreshes regularly, i.e. it is automated and will continue to run indefinitely
- I built an algorithm that creates a distribution of manager rating behaviours and identifies managers that are outside of the ‘normal’ distribution
- This output is available via the dashboard mentioned below
- I built a reusable dashboard that allows People Partners to meet with SLT members to discuss performance during calibration & as a post-mortem. I sought to encourage conversations that address:
- Function/SLT-org performance compared against the whole business’s performance, to see whether there are any unusual patterns; if there are, to discuss whether we expect these patterns or need to change something about the performance process or the underlying drivers of employee performance within that SLT-org/function
- Function/SLT-org performance compared across demographics that are often affected by evaluation biases, to encourage a discussion of those biases and to focus the conversation on how to account for these factors in future cycles/performance assessments
- Manager rating behaviours across the business to encourage conversation about varying rating behaviours across/within the functions so that managers can uplift each other and discuss their methods with each other
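As a minimal sketch of the kind of manager-rating outlier detection described above: this is not our actual algorithm, and the manager IDs, ratings, and z-score threshold are all invented for illustration. The idea is simply to compute each manager’s mean rating and flag managers whose mean sits unusually far from the average manager’s mean.

```python
from statistics import mean, pstdev

def flag_outlier_managers(ratings_by_manager, z_threshold=2.0):
    """Flag managers whose average rating sits far from the norm.

    ratings_by_manager: dict mapping a manager id to the list of
    ratings that manager gave. Returns (manager_id, mean_rating, z)
    tuples for managers whose mean rating is more than z_threshold
    standard deviations from the average manager's mean.
    """
    means = {m: mean(rs) for m, rs in ratings_by_manager.items()}
    overall = mean(means.values())
    spread = pstdev(means.values())
    if spread == 0:
        return []  # every manager rates identically; nothing to flag
    flags = []
    for m, mu in means.items():
        z = (mu - overall) / spread
        if abs(z) > z_threshold:
            flags.append((m, mu, z))
    return flags

# Invented example: one manager rates noticeably more generously
ratings = {
    "mgr_a": [3, 3, 4, 3],
    "mgr_b": [3, 4, 3, 3],
    "mgr_c": [5, 5, 5, 5],  # much higher average than peers
    "mgr_d": [3, 3, 3, 4],
}
print(flag_outlier_managers(ratings, z_threshold=1.5))
# flags only mgr_c
```

In practice this sort of logic lives in a scheduled query so the flags refresh automatically each cycle rather than being recomputed by hand.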
Why was this helpful and what does iteration look like?
I built all of these outputs so that they update automatically every performance cycle. This gives us a standardised, reliable assessment process that we can reuse and build upon each cycle. It enables:
- People Partners to invest time in thinking about ‘next steps’ and the ‘why’.
- (Random) example of a why & how:
- Why are we seeing a higher average rating for Women in Engineering, compared to men? Why is this reversed when looking at the whole company’s performance?
- If the reason is not ‘because women in engineering genuinely perform better than men’; what does this mean for us as a company, and how can we make the performance review process more equitable?
- We can iterate on this process and build on top of what we already have. Some iterations include:
- Changes to the flagging algorithm: Since we can now focus on discussing the highlighted employees, we can review if specific behaviours may not be of concern to us, or we might want to add a behaviour to the flagging list.
- Adding missing insights into the dashboard that we discovered as part of calibration/post-mortem discussions
- E.g. one addition I made after the last cycle, requested a few times, was ‘over time’ distributions, so we can view the company/SLT-org/function distributions across the last 4 cycles. This lets us see whether we are sustaining our employee quality over time or should make changes to any performance rating drivers.
- Regarding performance goals: People Partners now have more time to investigate whether we should set ‘performance goals’ or grade on a curve, based on what we see in our flags, manager distributions, and demographic distributions.
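The demographic ‘why’ example above boils down to grouping ratings by an attribute and comparing averages at two scopes. A tiny sketch with wholly invented records and field names (not our actual schema) shows how a gap inside one function can be the reverse of the company-wide picture:

```python
from collections import defaultdict
from statistics import mean

def avg_rating_by_group(records, group_key):
    """Average performance rating per group (e.g. gender, function)."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r[group_key]].append(r["rating"])
    return {g: round(mean(rs), 2) for g, rs in buckets.items()}

# Invented records where the gap reverses inside one function
records = [
    {"function": "Engineering", "gender": "F", "rating": 4},
    {"function": "Engineering", "gender": "F", "rating": 4},
    {"function": "Engineering", "gender": "M", "rating": 3},
    {"function": "Sales", "gender": "F", "rating": 2},
    {"function": "Sales", "gender": "M", "rating": 4},
    {"function": "Sales", "gender": "M", "rating": 4},
]
eng = [r for r in records if r["function"] == "Engineering"]
print(avg_rating_by_group(eng, "gender"))      # {'F': 4.0, 'M': 3.0}
print(avg_rating_by_group(records, "gender"))  # {'F': 3.33, 'M': 3.67}
```

The numbers themselves prove nothing on their own; as described above, they are the starting point for a discussion with the function heads about whether the pattern is expected.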
What are my limitations, specifically as a People Data Analyst?
- Regarding data:
- I can only expose data, I cannot create data
- e.g. in performance reviews there are many missing grades pre-calibration, because not all managers input their data on time. If you want more accurate averages (and employee flags), this needs to be raised with the managers who input the grades.
- Most people data is not ‘statistically significant’ because our sample sizes are small (e.g. the number of people within a demographic or function). We always need to remember that our insights are deeply contextual, and it is important that we use our ‘business knowledge’ and experience to supplement the data/behaviours we observe.
- e.g. in the performance review example, some functions are very small, so their performance distribution might look different from the company average. The idea is to discuss this with the function heads; if they are confident that everything is okay in their function and they simply happen to have a different-looking distribution for XYZ reason, then that is absolutely fine.
- Regarding what I can and cannot build:
- I am a Data Analyst, which means I can write code to join different tables together, layer ‘logical knowledge’ on top to analyse them, and then expose these outputs via our BI tools (e.g. Mode) as tables or graphs.
- Whilst I cannot sit in every session and discuss insights, I can (and should - please pull me up if I don’t do this) walk stakeholders through how to interpret the graphs / data that we expose via the output tables / dashboards. I have data literacy and would love to pass it on.
- Your experience as a People Partner is what should give you an instinct about why something happens and how we can fix it. I can then try to find data (if we have it) to support your claim or idea, or help you research how to fix something. Over time I will grow as a People Analyst and be able to provide business expertise of my own, but for now I rely on your expertise and can provide a critical eye and probing questions to explore thoughts and ideas.
- I cannot build live-input tools (e.g. something that lets you drag-and-drop people into a performance-grid and show you the output in real time) as I am not an engineer.
- I cannot influence the types of BI tools we use, so whatever I create can only be exposed in our current BI tool (Mode) or as a PDF/Word-document analysis. Rarely, and only if agreed in advance, I provide insights in a spreadsheet. Notably, the moment I share insights with you via a spreadsheet, the data becomes ‘static’, which means I pass ownership of the spreadsheet on to you. I generally do not ‘produce’ spreadsheets and am not responsible for how they look.
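To make the small-sample caveat above concrete, with wholly invented numbers: one person’s rating moving by a single point shifts a five-person function’s average by 0.2, while the same change barely registers across a 100-person company. This is why a small function’s distribution can look ‘off’ without anything being wrong.

```python
from statistics import mean

def average_shift(team_ratings, person_index, delta):
    """How much does one person's rating change move the team average?"""
    before = mean(team_ratings)
    shifted = list(team_ratings)
    shifted[person_index] += delta
    return mean(shifted) - before

# A 5-person function vs. a 100-person company, same one-point change
small_team = [3, 3, 3, 3, 3]
large_org = [3] * 100
print(round(average_shift(small_team, 0, 1), 3))  # 0.2
print(round(average_shift(large_org, 0, 1), 3))   # 0.01
```

The shift is simply delta/n, so the smaller the group, the more a single data point dominates any ‘pattern’ we think we see.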