An Evidence Based Approach to Scoping Reviews

February 24, 2011

“My” second paper (An Evidence Based Approach to Scoping Reviews, published in the Electronic Journal for Information Systems Evaluation) grew from a requirement I set for a PhD student I was industrial mentor to, back when I still wore a commercial hat. Essentially I wanted a more structured approach to the apparently haphazard review of existing work, so that I (as the commercial customer) could be confident about which concepts were already available.

When I left that company I lost track of the work, and only recently reconnected when I started at Cranfield. In the meantime it had lost direction, mostly because trying to inspect how you do something while you do it tends to get in the way of the work (not a new problem).

The paper as initially submitted to ECIME consisted of three poorly connected pieces of work, written by an overseas student whose English wasn’t all that strong (he has my sympathy). I re-edited it, ran some exercises to fill in examples, and added some pieces from my comments from when the work first started, which is how I came to be the last author.

There’s a lot more that still needs to be done in this field. It’s on my list of things to do…


When are Systems Of Systems not Systems?

November 26, 2010

Only the most trivial of systems are not composed of other systems, yet the term ‘System of Systems’ is used as if describing something distinct. So what is it? What’s the difference between a ‘system of systems’ and a ‘system of … things that aren’t systems’?

Is it a bigger thing?

For example, this paper (Net-Centric, Enterprise-Wide System-of-Systems Engineering And The Global Information Grid PDF) argues that systems-of-systems are not just a scaling up of systems-of-components but are distinguishable as follows:

Yet plainly many of these are simply differences of scale:

Local vs global is a simple geographic scaling, and is a shaky distinction anyway, depending on how you define ‘global’.

Similarly, lifespan extents are in practice in the eye of the beholder. Complex systems of systems such as human beings have lifespans of decades, yet systems of humans such as enterprises typically have similar lifespans.

Similarly, (not) understanding information flows is a feature of the engineer, not the component. A transport company is a system of components that include vehicles; vehicles are systems of components that include engine management systems, which in turn include microchip information exchanges that are often not well understood at all when operating in the real world, and so on. Understanding of the information exchanges varies from engineer to engineer and community to community.

The required functions change too; even if a car has been ‘optimised’ for a certain set of requirements, the uses the owner wants to put it to change from journey to journey, and over the lifetime of ownership as the owner’s lifestyle changes.

And so on.

Is it a new thing?

This paper (A New Accident Model for Engineering Safer Systems PDF), was included as a discussion paper at a Systems of Systems Architecture (Safety) group and claims that we are dealing with new and more complicated systems as technology enables more complex systems.

Yet biological systems are some of the most complex systems that we encounter, and the primitive farmer has had to run systems of these components as a matter of course. The horse pulling a plough, for example, has to be managed as a system and yet is an essential component of some subsistence agricultural farms.

Reverse the polarity…

A system of components is supposedly ‘well understood’ and so there is a top-down view of how the components interrelate and the components are seen as discrete black boxes. It is easy to diagram and describe.

With systems of systems these components are opened up and the interrelationships are less well understood; a kind of ‘inside out’ view, where we sit within this large surrounding system, looking around at a complexity we can’t comprehend.

These are descriptions of viewpoints and of the engineers’ understanding though, not descriptions of the systems themselves. As long as the terms are used as a way to categorise viewpoints this is all right, but unfortunately they seem to be used to describe a ‘new’ problem requiring ‘new’ ways of approaching it, thus discarding much of what we have learned about systems engineering.

It’s a learning thing

It may be that this is simply part of the way that we preserve corporate or community knowledge. Because expertise is hard to pass on, there is a tendency as new blood arrives to generate ‘new paradigms’ that are a small iterative improvement (hopefully) on the previous paradigm. People are essentially re-learning many existing concepts under the guise of exciting shiny new terms that provide the motivation. More later…

It’s just that I don’t understand

As long as we don’t lose sight of the fact that systems of systems are ‘just’ systems, the term can be used to indicate the engineers’ perhaps quite legitimate incomprehension of the complexity of the system under discussion.


Expert Confidence and Probability

October 22, 2010

Tyre marks on A4042

In the photo above are two parallel marks that we associate with tyre material scraping off onto the tarmac when wheels ‘lock’ under braking.

The marks are about a hundred meters long and are reasonably even until the last few meters. An emergency brake from around 80mph would use that distance.
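That distance figure can be sanity-checked with the standard constant-deceleration formula d = v²/(2a). A minimal sketch, assuming locked wheels give roughly 0.7 g of deceleration on dry tarmac (my assumption here, not a measured figure for this road):

```python
# Rough sanity check of the braking-distance claim: distance for a
# constant-deceleration stop, d = v^2 / (2 * a).
# The 0.7 g friction figure is an assumption (typical rubber on dry tarmac).

MPH_TO_MS = 0.44704  # metres per second in one mile per hour
G = 9.81             # gravitational acceleration, m/s^2

def stopping_distance(speed_mph: float, friction: float = 0.7) -> float:
    """Distance in metres to stop from speed_mph at friction * G deceleration."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * friction * G)

print(round(stopping_distance(80)))  # → 93
```

Around 93 metres from 80mph, which is consistent with marks of about a hundred metres.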

The vehicle was under steering control however; there is a small weave a few tens of meters after it starts, and the vehicle pulls into the parking bay that the photo was taken from. Wheels that are locked cannot be used to change direction, so these were not left by the front wheels.

It may not be clear from the image, but the tyre marks are continuous – there is no sign of the stuttering on/off/on/off that you get with Antilock Braking Systems.

The controlled steering, and reasonably even strength of the mark along most of the track, suggests tyres being dragged along under power until the vehicle could pull into a layby.

Commercial lorry trailers have emergency or parking brakes that are applied when the control coupling to the tractor fails. This stops trailers from running away if they come apart from the pulling tractor, which could be particularly invigorating for other road users on down-slopes.

The Conclusion

We can reasonably conclude that the brakeline (probably an air coupling) was not properly secured and came apart under vibration, the trailer tyres locked, the driver swerved slightly under the sudden pull, then turned into the layby to deal with the problem, and once it was dealt with drove off.

Now how do we rate confidence in that conclusion? What do I mean by “reasonably” when I say “we can reasonably conclude”? What do I mean by “probably” if I say “This is probably what happened”?

The Confidence

We may be tempted to use Frequency Probability terms: “I am 90% sure”. If we were analysing many of these marks across the country, and were able to use CCTV footage or track down witnesses to find out what actually happened, then such terms would be appropriate. But in this case we have no priors, only one situation to assess, and no clear view of the possible solution space. Framing confidence as “there is a 6 out of 10 chance that this is true” is not helpful and probably (heh) misleading.
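To illustrate why frequency terms only make sense with repeated cases, here is a minimal sketch using a standard frequentist interval (the Wilson score interval); the incident counts below are entirely hypothetical:

```python
# Sketch of why frequency-probability terms need repeated cases: a simple
# frequentist estimate with a 95% Wilson score interval. The counts are
# hypothetical -- imagine CCTV-verified tyre-mark incidents nationwide.
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials
                                   + z * z / (4 * trials * trials))
    return centre - half, centre + half

# With 45 of 50 verified incidents caused by trailer brake failures,
# "I am about 90% sure" has a defensible basis:
print(wilson_interval(45, 50))
# With a single observation, the interval is too wide to mean anything:
print(wilson_interval(1, 1))
```

With many verified cases the interval is usefully tight; with one case it spans most of the range, which is the point: a single situation gives no frequency to speak of.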

The narrative in the conclusion above describes why we came to that conclusion and lets the reader form their own confidence: you can see the reasoning and check with other sources and experts that the story is indeed plausible. Confidence in plausibility is a requirement for confidence in the conclusion, but it is not sufficient.

We don’t – and possibly cannot – know all the other possible solutions to the problem. We can sit around in a group and brainstorm other options, some of which will be more likely (a caravan rather than a lorry trailer) than other believable ones (a rear-steer car braking hard) or stranger ones (a practical joker with a pot of paint who goes around painting marks on roads). We are still left with the ‘unknown unknowns’ – the large space of possible, plausible and likely solutions that we haven’t thought of.

The Need

This is obviously a problem when trying to convey expert opinion to non experts, particularly decision makers. Decision makers have to weigh expert opinion from many different experts that often do not compare directly (hospitals vs education), and so of course want to know how sure each expert is.

[There is also the huuuuge subject of how framing information changes decisions]

The decision maker may also want to know something about the expertise of the expert: is the conclusion above from a deskbound road safety researcher, from a brake designer, a tyre designer, a traffic officer, a truck driver, a passing motor enthusiast, or a tree-hugging homeopath? Or someone with experience in all of these?

Not only is the ‘quantity’ of an expert’s expertise important, but also its quality. A narrow-minded expert may read the opinions of other experts incorrectly (do Antilock Braking Systems actually stutter?) or incompletely (if most do, are there some that do not?). An expert from a small community may also be ‘warped’ by prevailing attitudes in that community; biases in the expertise will frame the way the expert approaches the likely solutions.

My friend, ex-colleague and Very Difficult Person, John Salt, asserts that expertise is the internalising of knowledge in a particular domain, and that that very internalising makes it difficult to understand how well conclusions are formed, let alone describe them.

I am more optimistic, but I don’t understand why and certainly can’t explain it.


Life & Death Decisions using Sparse, Unreliable Evidence

August 23, 2010

Wahey, my first ‘paper’ (here, PDF) has been reviewed and accepted for the European Conference on Information Management and Exploitation in early September.  It looks rather lonely on its own on my shiny new publications list page, but the title makes up for it.

It became a bit of a brain dump for everything I could think of that was relevant, so it’s a broad sweeping outline with plenty of pieces to pick up and work on in more detail.

Many thanks to John Salt for helping to write it, my parents for their initial review, and friends especially Tom Targett and Eric Titley at the ROE for proof reading the later versions.


Laptop, Laptop, On The Wall

April 20, 2010

Three years ago I temporarily attached a laptop to the wall in my kitchen. Temporarily as it was about to be redecorated – which is only just now happening as I finish upgrading the house.

It provides web access, email conversations, instant messaging, Skype and a whole range of media including music, iPlayer and the like, as well as locally stored films, audio books and documentaries to keep me occupied while cooking. You can even – gasp – look up recipes.

All this makes it much more useful than a TV, and these days netbooks provide all that functionality for the same price as a kitchen flip-down TV.

Some people have asked how I did it, so here we go. A later post will describe the even more excitingly interesting new and improved wall-mount, which solves some issues with the current one.

Choose Your Laptop

Unlike a desktop, which might otherwise seem suitable for a static multimedia player in a kitchen, a laptop takes up very little space and the user interface is all handily bundled together: there are no separate keyboards and mice with cables to clutter and clean, or batteries to replace.

Ideally the laptop should have ports for power, ethernet and audio on the side or the rear for tidiness. I used external speakers to get a bit of volume when the frying is particularly loud (although I hear there are quieter ways of cooking, apparently few of them involve lard).

Netbooks will provide enough processing power for ordinary multimedia needs, but bear in mind some of the screens might be annoyingly small when viewed across a kitchen.

Touchpads seem better than nipples for controlling the mouse pointer, particularly as they are easy to wipe down if you use them with inadvertently greasy or wet fingers.

Choose Your Location

I had my house wired for ethernet and telephone while having the electrics redone, and so I had power, ethernet and telephone sockets put in below where I wanted the laptop to be. If you need to use existing connections then obviously you’ll be a little more limited; wireless can help but a nearby power socket is needed if you don’t want draping cables getting greasy.

Ideally it should also be in your line of sight while preparing food. However I have it on the wall opposite the work surface – usually I’m listening to something rather than watching it, and it means it’s clear of all the organic mess that is associated with my cooking.

Finally, check the height on the wall against the laptop’s vertical field of view. Most screens can be seen quite well from left to right, and from above, but some don’t look good at all from ‘below’. So if it’s going to be placed flat against the wall (as below) then it’s best to put it a little below the eye height of the shortest viewers.

The Simple Fixed Mount

I wanted to make sure the laptop was removable in case I wanted to take it away, and that when not used it should fold up.

Most laptop clamshell cases will only open around 120 degrees (as below) which is handy as it means that if you fix the top screen shell to the wall the bottom keyboard shell will stick out at a convenient typing angle. If it does not, then a backing support might be required.

This mount then is a simple board about the size of the screen, with angle brackets to ‘clasp’ the screen, and a lip at the bottom to support the weight of the laptop. The whole thing is screwed to the wall:

Board Mount

In this case, as a temporary fixing, it was made with an old bit of chipboard and some battening screwed to the bottom of it as the lip.

The laptop then slides down from the top so that the angle brackets ‘clasp’ it to the board:

Laptop Open

Bear in mind that screws into chipboard from the end like this, or into end-grain of wooden board, have weaker holds along the screw than when screwed in across the grain. That shouldn’t be a problem as there is little strain that way.

Since the angle brackets are thin, the laptop can then be closed up out of the way:

Note that some laptops might not have a conveniently ‘square’ bottom to the upper clamshell, and so the lip might not be sufficient. Make sure you buy one that does. Make sure you read all the instructions before starting.

And tidy up

This is a tatty temporary mount and looks it. Using a wooden board rather than chipboard, sanded slightly to round off the corners and painted the same colour as the wall, would make a significant improvement. Similarly, painting the angle brackets to match the laptop would help.

Finally, the base of the laptop is pretty ugly; it’s fine for a bachelor’s gadget-infested pad, but I would be interested in any suggestions as to how to tidy that up a little.


Incompetent Systems

December 7, 2009

A few years ago I worked on an excellent research project called AstroGrid.  Nearly twenty commercial software engineers were to build a distributed astronomy data analysis toolkit to support astronomers, as part of an international global effort.

Some astronomers said they had seen it all before and remained skeptical, though their enthusiasm to help was not apparently diminished. I pooh-poohed them; I’d come from developing satellite control center software. I knew how to deliver software that worked.

We had a bunch of bright engineers, who worked hard and produced some pretty good stuff – sometimes very good stuff.  In my case, obviously, it was amazing stuff.

And after two years of quarterly iterative releases we’d still delivered no usable product.  There were a few applications deployed and used here and there, and some proper new science forced through as example test cases, but nothing that astronomers couldn’t have knocked up themselves in a few days of scripting. Sometimes they already had.

Lessons Learned

Forty man-years largely wasted and the project continued in the same vein – what was wrong?

There are lots of technical reasons: poor and shifting requirements, contradictory overall objectives, very little actual commercial experience in the team, unsuitable release procedures and version control, immature support tools, and so on.

But really these are all messing about in the weeds, looking for specific problems and specific someones to blame. Why did these technical problems exist, and why weren’t they resolved?  How did they persist – for so long?  More importantly, given there’s nothing new about these problems, why did they exist in this particular project and its organisation in the first place?

Importantly, I can’t think of any of the staff who were incompetent, and that includes the project manager and chief tech – roles that are sometimes (and sometimes should be) held responsible for project process. In this case, while I disagreed with some of the activities (particularly the release process), they were hard-working and experienced, and yet still we produced nothing. For years.

Competent People working in Incompetent Systems

Imagine a new coal mine owner, who pops down the local and employs a bunch of brawny lads to mine the coal. He pays them for the amount of coal they hack off the coal face, leaves a foreman in charge to handle pick axe handle repair and pay and so on, and settles down in the now peaceful pub for a quiet pint, and waits for the money to roll in.

Within a few days the miners have pushed a little coal out of the mine to make room to get to the coal face and swing a pick, but little else.  Even if we assume they collaborate with each other (people are sociable and to some extent self-organise) to avoid bringing the roof down, there’s no incentive to get coal out and sold, just to get it far enough out to make room to hack more off the wall.  The targets, the incentives, the organisation, are all useless for the owner or any of the cold shivering pensioners waiting for the three lumps they can afford.

More importantly (because we often do things wrong, it’s nothing to be overly ashamed about), there is no local remit to make the changes required to make it useful. The foreman does not have the budget, the incentive or the executive power to change the organisation or targets to make the mine productive.

Someone somewhere can generally be found to make suitable changes. In this case, someone could pop down the pub and find the owner and tell him what’s up. But why bother? Time away from the coal face is time not earning. And who knows what the owner is like, maybe you’ll get fired for disturbing him. Again, there are barriers to improvement.

It’s an incompetent system, staffed by competent people.


What a lovely jargon word. But quite appropriate: who really is supposed to look after projects to make sure they get the support and training and the right staff at the right point, the right checkpoints and feedbacks and incentives?

In the commercial world near the marketplace these are normally fairly straightforward: money is the incentive. In order to make money, you have to provide someone with something they are willing to pay for. The focus is on that delivery of usefulness. So there are ‘automatic’ readjustments that come from that focus; if the miners were being paid for coal sold rather than hacked off a coalface, they would likely organise themselves into some sort of suitable structure and work process to deliver that coal.

As we step further away from the marketplace – to R&D projects in the commercial world, or academic research – getting these incentives right is trickier. Long term benefits of blue sky research are not only hard to define, but if you don’t have a good handle on some kind of target then you can’t differentiate between lack of delivery because we haven’t worked enough yet, and lack of delivery because the system is squashing any progress.

The Tools

Here I assume that pick axes (or more advanced machinery) are available. That people share a language (which isn’t always the case) and so self-organising is feasible. That currencies are in use, and so on.

When the tools are not available, the system isn’t the failure point. For example, evidence-based medicine has spread widely only relatively recently; it was hard to build systems for Good Medical Treatments without it.

Competent Systems

Competent systems don’t always succeed. Competent systems encourage success. Importantly, they have the feedback mechanisms that drive change from failures (and sometimes success) in order to direct effort to more success, rather than let that effort dissipate uselessly, or even have it directed unintentionally harmfully, as Incompetent Systems do.


Handy Basic PM Checklist

November 21, 2009

John Brinkworth of Serco Consulting wrote a useful article in the rather patchy magazine Project Management Today. Most of the articles there are fairly superficial, or thinly disguised advertising for business tools or consultancy firms, but that doesn’t make them useless.

John’s article dresses up ten basic rules for managing your project well, at least technically, and I think they make a good checklist. Here’s my own slightly modified and heavily cut down version:

Objectives: Make sure you know what you’re doing, unambiguously and measurably, which means making the objectives specific enough to measure. Prioritise them. Make sure everyone knows what they are.

Stakeholders: Engage with external experts, buyers, end users and their managers, maintenance staff and their managers, and suppliers.

Resources: What have you got for staff, budget, space, equipment, training? For how long?

Deliverables: Related but not quite the same as objectives, know what you actually have to deliver and why, and which are more important than others.

Schedule*: Plan the work in pieces short enough to establish a tempo: targets close enough to be felt, feedback as each one is reached, visible progress for everyone, and problems corrected early rather than late.

Quality: The engineer’s usual prime focus, so I won’t expand here.

Change: Stakeholders will change their mind and need new things, different things. Your team will find some things harder than expected, some easier. Track change requests and their acceptance (or not) so you know what the new changed status is of all the other factors, such as deliverables and acceptance criteria.

Pragmatism: Work in the real world. Wishful thinking is fine but don’t let it affect the tasks. Hedge optimism. Mitigate pessimism.

Acceptance: Stay close to your stakeholders, particularly the ones with the cash. Know what’s acceptable and make sure it’s measurable. Understand what’s good enough: you may well – and should – aim for higher than this, but you need to know what will be ‘all right’ to fall back on.

Clean Finish: When it’s all over, tidy up. Give the team a finish marker (a party, gifts if they deserve them, a short speech thanking them). Feed the lessons learned sensibly into company doctrine. If it’s successful, make sure all concerned are aware of it, make sure it’s written up well and appropriately in staff CVs and their appraisals, in the company newsletter, in customer publications, in press releases.


* Steve Smart at Logica’s Space & Defence Division once told me “There are three factors that you always need to balance: cost, quality and schedule. Managers tend to concentrate on cost, engineers on quality. But the important thing to the customer is generally schedule: delivering something that is not quite right and costs a bit too much is much preferable to delivering something that is late, because at least they get something they can use.”