Incompetent Systems

December 7, 2009

A few years ago I worked on an excellent research project called AstroGrid. Nearly twenty commercial software engineers were to build a distributed data analysis toolkit for astronomers, as part of an international effort.

Some astronomers said they had seen it all before and remained skeptical, though their enthusiasm to help was apparently undiminished. I pooh-poohed them; I’d come from developing satellite control centre software. I knew how to deliver software that worked.

We had a bunch of bright engineers, who worked hard and produced some pretty good stuff – sometimes very good stuff.  In my case, obviously, it was amazing stuff.

And after two years of quarterly iterative releases we’d still delivered no usable product.  There were a few applications deployed and used here and there, and some proper new science forced through as example test cases, but nothing that astronomers couldn’t have knocked up themselves in a few days of scripting. Sometimes they already had.

Lessons Learned

Forty man-years had been largely wasted, and the project continued in the same vein – so what was wrong?

There were lots of technical reasons: poor and shifting requirements, contradictory overall objectives, very little actual commercial experience in the team, unsuitable release procedures and version control, immature support tools, and so on.

But really this is all messing about in the weeds, looking for specific problems and specific someones to blame. Why did these technical problems exist, and why weren’t they resolved? How did they persist for so long? More importantly, given there’s nothing new about these problems, why did they arise in this particular project and its organisation in the first place?

Importantly, I can’t think of a single member of staff who was incompetent, and that includes the project manager and chief tech – roles that are sometimes (and sometimes should be) held responsible for project process. In this case, while I disagreed with some of the activities (particularly the release process), they were hard-working and experienced, and yet still we produced nothing. For years.

Competent People working in Incompetent Systems

Imagine a new coal mine owner, who pops down the local and employs a bunch of brawny lads to mine the coal. He pays them for the amount of coal they hack off the coal face, leaves a foreman in charge of pickaxe repairs, pay and so on, settles down in the now peaceful pub for a quiet pint, and waits for the money to roll in.

Within a few days the miners have pushed a little coal out of the mine to make room to get to the coal face and swing a pick, but little else. Even if we assume they collaborate with each other (people are sociable and to some extent self-organise) to avoid bringing the roof down, there’s no incentive to get coal out and sold, only to get it far enough out of the way to make room to hack more off the wall. The targets, the incentives and the organisation are all useless to the owner, or to any of the cold, shivering pensioners waiting for the three lumps they can afford.

More importantly (because we often do things wrong, and that’s nothing to be overly ashamed of), there is no local remit to make the changes needed to fix this. The foreman has neither the budget, the incentive nor the executive power to change the organisation or the targets to make the mine productive.

Someone somewhere can generally be found to make suitable changes. In this case, someone could pop down the pub, find the owner and tell him what’s up. But why bother? Time away from the coal face is time not earning. And who knows what the owner is like – maybe you’ll get fired for disturbing him. Again, there are barriers to improvement.

It’s an incompetent system, staffed by competent people.

Governance

What a lovely jargon word. But quite appropriate: who, really, is supposed to look after projects to make sure they get the support, the training and the right staff at the right point, and the right checkpoints, feedback and incentives?

In the commercial world, close to the marketplace, these are normally fairly straightforward: money is the incentive. In order to make money, you have to provide someone with something they are willing to pay for. The focus is on that delivery of usefulness, and there are ‘automatic’ readjustments that come from that focus; if the miners were being paid for coal sold rather than coal hacked off the coal face, they would likely organise themselves into some sort of suitable structure and work process to deliver that coal.

As we step further away from the marketplace – to R&D projects in the commercial world, or to academic research – getting these incentives right is trickier. The long-term benefits of blue-sky research are hard to define, and if you don’t have a good handle on some kind of target then you can’t differentiate between a lack of delivery because we haven’t worked on the problem long enough yet, and a lack of delivery because the system is squashing any progress.

The Tools

Here I assume that pickaxes (or more advanced machinery) are available; that people share a language (which isn’t always the case), so self-organising is feasible; that currencies are in use; and so on.

When the tools are not available, the system isn’t the failure point. For example, evidence-based medicine has spread widely only relatively recently; without it, it was hard to build systems for delivering Good Medical Treatments.

Competent Systems

Competent systems don’t always succeed; they encourage success. Importantly, they have the feedback mechanisms that drive change from failure (and sometimes from success) in order to direct effort towards more success, rather than letting that effort dissipate uselessly, or even having it unintentionally directed to do harm, as Incompetent Systems do.