Expert Confidence and Probability
October 22, 2010
In the photo above are two parallel marks that we associate with tyre material scraping off onto the tarmac when wheels ‘lock’ under braking.
The marks are about a hundred metres long and reasonably even until the last few metres. An emergency stop from around 80mph would take about that distance.
The vehicle was under steering control, however: there is a small weave a few tens of metres after the marks start, and the vehicle pulls into the parking bay from which the photo was taken. Wheels that are locked cannot be used to change direction, so these marks were not left by the front wheels.
It may not be clear from the image, but the tyre marks are continuous – there is no sign of the stuttering on/off/on/off that you get with Antilock Braking Systems.
The controlled steering, and the reasonably even strength of the mark along most of its length, suggest tyres being dragged along under power until the vehicle could pull into a layby.
Commercial lorry trailers have emergency or parking brakes that are applied when the control coupling to the tractor fails. This stops trailers from running away if they come apart from the pulling tractor, which could be particularly invigorating for other road users on down-slopes.
We can reasonably conclude that the brakeline (probably an air coupling) was not properly secured and came apart under vibration, the trailer tyres locked, the driver swerved slightly under the sudden pull, then turned into the layby to deal with the problem, and once it was dealt with drove off.
Now how do we rate confidence in that conclusion? What do I mean by “reasonably” when I say “we can reasonably conclude”? What do I mean by “probably” if I say “This is probably what happened”?
We may be tempted to use Frequency Probability terms: “I am 90% sure”. If we were analysing many of these marks across the country, and were able to use CCTV footage or track down witnesses to find out what actually happened, then such terms would be appropriate. But in this case we have no priors, only one situation to assess, and no clear view of the possible solution space. Framing confidence as “there is a 6 in 10 chance that this is true” is not helpful and is probably (heh) misleading.
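To make the contrast concrete: a frequency statement like “I am 90% sure” only earns its meaning from a denominator of repeated, checkable cases. A minimal sketch of what that would look like if the survey existed (the numbers below are entirely invented for illustration):

```python
import math

# Hypothetical survey: of 50 similar skid-mark sites checked against
# CCTV footage or witnesses, 45 turned out to be trailer brake failures.
# These counts are invented purely to illustrate the arithmetic.
cases_checked = 50
brake_failures = 45

# Frequentist point estimate of the proportion
p = brake_failures / cases_checked  # 0.9 -> "90% of such marks"

# Normal-approximation 95% confidence interval on that proportion
se = math.sqrt(p * (1 - p) / cases_checked)
low, high = p - 1.96 * se, p + 1.96 * se

print(f"estimate {p:.2f}, 95% CI ({low:.2f}, {high:.2f})")
```

With many verified cases, the “90%” is a measured rate with an honest error bar. With a single, unrepeated incident there is no such denominator to divide by, which is exactly the difficulty described above.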
The narrative in the conclusion above describes why we came to that conclusion and lets the reader build their own confidence: you can see the reasoning and check with other sources and experts that the story is indeed plausible. Confidence in plausibility is a requirement for confidence in the conclusion, but it’s not sufficient.
We don’t – and possibly cannot – know all the other possible solutions to the problem. We can sit around in a group and brainstorm other options, some of which will be more likely (a caravan rather than a lorry trailer) than other believable ones (a rear-steer car braking hard) or stranger ones (a practical joker with a pot of paint who goes around painting marks on roads). We are still left with the ‘unknown unknowns’ – the large space of possible, plausible and likely solutions that we haven’t thought of.
This is obviously a problem when trying to convey expert opinion to non-experts, particularly decision makers. Decision makers have to weigh expert opinion from many different experts on matters that often do not compare directly (hospitals vs education), and so of course want to know how sure each expert is.
[There is also the huuuuge subject of how framing information changes decisions]
The decision maker may also want to know something about the expertise of the expert: is the conclusion above from a deskbound road safety researcher, from a brake designer, a tyre designer, a traffic officer, a truck driver, a passing motor enthusiast, or a tree-hugging homeopath? Or someone with experience in all of these?
Not only is the ‘quantity’ of an expert’s expertise important, but also its quality. A narrow-minded expert may read the opinions of other experts incorrectly (do Antilock Braking Systems actually stutter?) or incompletely (if most do, are there some that do not?). An expert from a small community may also be ‘warped’ by prevailing attitudes in that community; biases in the expertise will frame the way the expert approaches the likely solutions.
My friend, ex-colleague and Very Difficult Person, John Salt, asserts that expertise is the internalising of knowledge in a particular domain, and that that very internalising makes it difficult to understand how well conclusions are formed, let alone describe them.
I am more optimistic, but I don’t understand why and certainly can’t explain it.