Feel the churn

While enjoying my coffee this morning I found myself pondering the different methods of measuring progress on a project that I’ve seen over the years. I’ve found many objective measures that help, and I’m sure many people would agree. But what about the more nebulous type of metric? The kind that is dressed up in scientific method but probably shouldn’t be – like a metric for an individual’s performance. Keeping to the realm of my own experience, software development, I wondered how other people feel about certain metrics and their potential consequences.

“If you tell a room full of very bright people that you are tracking a metric, you will get metric optimisation, not a great product.” – SuadeLabs.org

For example: I once worked (albeit a long time ago) for a company that had no QA at all, so when I joined it was just me as the single QA alongside about 20 developers. The developers seemed to be under the impression that my job was to spy on them, so they weren’t very receptive; as for the managers, they didn’t really know what they wanted from a QA beyond the obvious – fewer defects. One of the managers hatched a plan to reward the devs financially for every defect THEY found… you can guess what that metric led to. I was inexperienced and naive, so I suggested it might be better to reward me for finding defects instead. That felt more logical at the time (well, I wasn’t paid much back then) – but now I see it’s no better. Why? Because, like the devs, I would have switched my attention to new and clever ways to ‘detect’ defects. You would think that’s great, no? Well, not really. It made the already bad ‘QA vs Dev’ situation worse, and surely our time would have been better spent working together as a team – you know, collaborating – to ensure defects didn’t happen in the first place? Once I eventually won the respect of the developers, we were able to dispense with the urge to clutch at silly ideas and instead concentrate on baking quality into the product.

So what of other metrics? Take developer productivity, for example. I would hope most people aren’t subjected to this one anymore, but you never know. It rests on the erroneous assumption that a coder who churns out more lines of code than their colleagues is more productive. Really? If a coder solves a problem with a few lines of code instead of hundreds, surely they are the more productive one? Isn’t the whole point of being a coder to solve problems using software, rather than to be a code-churning machine? Developers are problem solvers who happen to use software to do it; churning out code is not the point. You have probably seen what happens when this metric is applied – yep, loads of unnecessary code that breaks every aspect of SOLID engineering.
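To make the point concrete, here’s a toy illustration I’ve knocked up for this post (it isn’t code from any real project): two ways of solving the same trivial problem. A lines-of-code metric would score the verbose version as roughly five times more ‘productive’.

```python
# Toy illustration only: two hypothetical implementations of the same
# requirement, "sum the even numbers in a list". Both are correct; the
# second is shorter, clearer and easier to maintain, yet a lines-of-code
# metric rewards the first.

def sum_evens_verbose(numbers):
    evens = []
    for n in numbers:
        if n % 2 == 0:
            evens.append(n)
    total = 0
    for e in evens:
        total = total + e
    return total

def sum_evens_concise(numbers):
    return sum(n for n in numbers if n % 2 == 0)

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5, 6]
    assert sum_evens_verbose(data) == sum_evens_concise(data) == 12
```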

Perhaps bonus schemes based upon metrics of individual performance are of more help? They might work all right in a sales environment, but I’m sticking to software development, where the assessments are much more abstract and wishy-washy. It basically comes down to a subjective determination from your manager. And what about the notion of bonuses themselves? Aren’t we then compelled to examine how they flick a switch in the individual’s brain, specifically the reward centre, making them focus on personal gain rather than teamwork? I’m sure I’m not alone in witnessing how this kind of thing can turn normally rational and collaborative people into blinkered point-scorers, using their mental resources to grab any advantage over their perceived competition at work rather than working with their colleagues toward a common goal. I say ‘perceived’ because that competition wouldn’t exist if the organisation prized collaboration over measuring individuals on a spreadsheet.

Ok, so what about ostensibly well-intentioned metrics? Ah, you know what they say? “The road to Hell is paved with good intentions” – anon.

Which brings us to….

….I’ve been a Team Lead and thoroughly enjoyed working with the team to find out what drives them and what motivates them. In 1-2-1s I would focus on what they felt they needed or wanted in order to progress; not just in the current setting but generally in their career. The enjoyment, however, was snuffed out by the organisational use of the Bell Curve. Quite apart from the controversial aspects of that concept following a certain publication in the mid-90s that I won’t get into here, it was never intended as a performance management tool; its original purpose was to show the spread of values of anything affected by the cumulative effects of randomness. The idea is usually attributed to the 19th-century German mathematician Carl Friedrich Gauss, but it was arguably discovered earlier by a French maths teacher named Abraham de Moivre, who was trying to calculate how often a given number of heads or tails turns up across lots of coin tosses. Mathematicians call it the ‘Normal Distribution’.
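If you fancy seeing de Moivre’s idea for yourself, here’s a small throwaway Python sketch (my own illustration, nothing to do with any appraisal system): count the heads in lots of runs of coin tosses and the counts pile up into that familiar bell shape.

```python
# Rough illustration of de Moivre's coin-toss observation: count the heads
# in many runs of 100 tosses each. The counts cluster around 50 and thin
# out towards the extremes, tracing the bell shape of the Normal Distribution.
import random
from collections import Counter

runs, tosses = 10_000, 100
head_counts = Counter(
    sum(random.random() < 0.5 for _ in range(tosses)) for _ in range(runs)
)

# Crude text histogram: one '*' per 20 runs that produced that head count.
for heads in range(35, 66):
    print(f"{heads:3d} | {'*' * (head_counts[heads] // 20)}")
```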

First of all, its use as a means to label employees’ performance is flawed ‘from the get-go’, as the Americans might say. Measuring a Normal Distribution is an objective exercise, but assessing employees is highly subjective. And because of the shape of the Bell Curve, if you try to use it to measure employee performance you are forced to artificially ensure there are outliers at either end of the spectrum – the low performers and the high performers. Why do I call that placement artificial? Because as a Team Lead I would rank my team members as honestly as I could (still subjective, of course), and then be told by management that I had to downgrade a person’s score simply because the organisation already had enough people in that segment of the spectrum. Et voilà – you now have fake scores. This is apparently very common. Now just stop and think for a second what that does to the person being artificially labelled ‘average’ or ‘low performer’ when both you and they know it isn’t the real picture. The way the Normal Distribution works dictates that before you even start assessing employees (a misuse of it, in my opinion), you know there has to be a ‘bad performer’, an ‘average performer’ and an ‘excellent performer’, REGARDLESS of their actual performance. I’ve seen people leave companies because of this – hence the title of this article.
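To spell the mechanics out with a toy example (the names, scores and quota below are entirely invented for illustration): once a forced distribution is in place, the quota decides the labels, not the scores.

```python
# Hypothetical sketch of forced ranking. The quota, not the honest scores,
# decides who gets labelled 'low', 'average' or 'high'; names and numbers
# are invented purely for illustration.
honest_scores = {"Ana": 88, "Ben": 86, "Cho": 85, "Dev": 84, "Eli": 83}

# A typical forced-distribution quota: top 20%, middle 60%, bottom 20%.
ranked = sorted(honest_scores, key=honest_scores.get, reverse=True)
n = len(ranked)

labels = {}
for i, person in enumerate(ranked):
    if i < round(n * 0.2):
        labels[person] = "high performer"
    elif i < round(n * 0.8):
        labels[person] = "average performer"
    else:
        labels[person] = "low performer"

print(labels)
# Even though only five points separate the whole team, someone must be
# tagged a 'low performer': the label is baked into the quota.
```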

Another common and really quite immoral aspect of this kind of measurement is that the company conflates overall company performance with individual performance in order to avoid paying bonuses – a reward they themselves dreamt up, and one I’ve already taken apart above. What you’ll see quite commonly is the scenario where an employee is told that yes, they have met every criterion for that promotion or that bonus, but unfortunately the company as a whole has under-performed, so no promotion or bonus – sorry. Add to that the inevitable stagnation of those lumped into the ‘average performer’ segment of the curve, who end up demotivated. Again, all you are doing is setting up employee churn, and all because the company chose to use a statistical tool in a way it was never intended to be used.

I personally think organisations should follow Microsoft’s lead, dump ranking systems like this, and focus more on teamwork and employee growth. After all, if you believe you’re hiring the best, why do you feel the need to keep putting individuals under the microscope? Can’t you just measure customer satisfaction, feature delivery, or market share? There are no doubt even better suggestions out there than these, but surely we can at least do away with some of the flawed metrics I’ve mentioned in this article.

If employees feel engaged, valued, respected and included, surely the need to pit them against each other vanishes?

Such was the pattern of thought over that coffee this morning… maybe I should switch brands?

© Copyright 2022 Cognito Square Ltd
