Archive for the ‘Information Management/Exploitation’ Category


“Just Integrated Enough” – Coherence vs Agility

September 27, 2012

Look around any enterprise and we will see duplication, unnecessary redundancy, specialised isolated systems that do similar things to other specialised isolated systems, and gaps between information systems that people have to bridge. “Aha!” we cry, “There is waste here – these things cost us money, and time, and attention. We must Do Something about it.” We set up a central integration committee with the power to set and enforce standards, so that the enterprise’s developers build systems that interoperate across the enterprise and these costs disappear.

Sadly, setting standards is not free. It costs money, but it also costs time, it costs attention, and it costs innovation – just the things we are trying to save. If we are not careful, we can cause more delay, cost and distraction than we save. So when we consider the role of central coordinating boards – and we know we need them, because completely ad-hoc, chaotic, disconnected systems are also harmful – we need to understand these costs so they can be weighed against the costs of the activities we want to reduce. And of course, we must bear in mind that this weighing activity itself costs money, time and attention…

For a central integration board to have the power to force integration it essentially must have the remit to dictate what a project must do; the project is not allowed to complete until it has satisfied the board in some way.

This essentially means the board becomes yet another vetoing ‘touchpoint’ for any proposed change. Time and effort have to be spent by the board understanding the nature of the integration needs of the new system, and the new system must ‘fit’ into the concepts of the board – concepts often far removed from the task at hand. As board personnel rotate through, certification will pause while the new staff get to grips with the situation. The board not only acts as yet another barrier to change (many enterprises already have ‘too many’ stakeholders with the ability to veto work), it acts as a bottleneck to the changes it finally approves. This doesn’t just take time, it takes brain power that could be used elsewhere.

As an example, consider “bottom-up” standards. These are driven by the requirements of the task operators (the people carrying out the work) and their need to communicate. The requirements are collected and collated, and a central standard can then be specified. The system developers can then submit their interfaces to be certified by the central board, and once approved can implement systems that fulfil those specifications. Even in the best cases, this imposes delays. In the worst cases – where the needs of the task operators change before the previous needs have been fulfilled – it never completes.

Each round-trip generates another standard, all versions of which must be implemented by the appropriate systems to ensure backwards compatibility with those who have not yet changed.

This all costs money as well as time. The requests and approvals (and refusals, and resubmissions, and…) cost enterprise attention – they distract the enterprise’s experts from the work they are being employed to do. The extra barrier reduces the willingness of people to experiment, to adapt, to innovate. Except, possibly, in ways to bypass onerous central standards committees.

(“Top-down” standards, on the other hand, are nearly always inappropriate to the needs of the task operators, and system integrators resort to workarounds and misuses of the standards, leading again to extra work and extra delays – and to an increasing disconnection between the shared, documented interface protocols and the actual ones.)

To avoid these problems it can be tempting to be abstract, to be vague, to provide overarching guidelines and approaches. The result is a set of compliance requirements that are so abstract that they do not inform or support integration efforts, but still require… compliance.

To provide core interoperability while preserving the freedom to innovate, ‘extensible’ standards can offer the best of both worlds. Sadly, without great care, they can also provide the worst of both: the delays required to approve integration work, and a chaotic mix of incompatible extensions.

When XML first arrived it was hailed as the new interoperable standard that would – at last – mean that anything could talk to anything else. As a format it does indeed get around many of the issues with bespoke binary formats (while introducing some others), but even with a schema it solves only the format problem, not the rather more critical issues of agreed meaning and protocols. Without agreement on what the data means, what is required and what is not, a common format is useless. Indeed, format has been the most trivial issue in interfacing – even at low, detailed levels – for some time. The same applies to higher-level integration: the medium tends to be secondary to the need to agree meanings and ways to resolve misunderstandings and uncertainties.
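As a minimal sketch of the point (the element names and units here are hypothetical, not taken from any real system), consider two XML messages that are both well formed, and would both pass a loose schema, yet cannot safely be exchanged because the senders never agreed what the fields mean:

# Two well-formed XML messages that share a format but not a meaning:
# both parse cleanly, yet one reports range in metres and the other in
# nautical miles, and only one includes the timestamp the receiver
# quietly assumes is mandatory.
import xml.etree.ElementTree as ET

msg_a = "<track><id>7</id><range>1200</range><time>2012-09-27T10:00Z</time></track>"
msg_b = "<track><id>7</id><range>0.65</range></track>"   # nautical miles, no time

for msg in (msg_a, msg_b):
    track = ET.fromstring(msg)               # the format problem: solved
    rng = float(track.findtext("range"))     # the meaning problem: untouched
    print(track.findtext("id"), rng)         # 1200 of what? 0.65 of what?

Parsing succeeds either way; the disagreement only surfaces when someone asks what the numbers actually denote.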

If the tasks and organisation structures remain similar for long periods these issues tend to disappear. For example, the coalition forces in Afghanistan have a highly complex, impressively integrated network of collaborations because there has been time to crystallise collaboration TTPs (tactics, techniques and procedures) around the tasks. Strong standards that have emerged from years of work are welcome in this environment. Strong policing boards that govern such stable systems are not so welcome if they prevent those systems from adapting quickly when the tasks change, as they are likely to between contingency operations.

The Solution
The solution is, of course: ‘it depends’.

The first step is simply to recognise that the costs exist, and therefore to identify them and compare them. Costs might be delivery time, money spent up front, money spent maintaining, effort (manpower) spent maintaining, various qualities of the deliverable (reliability, etc.) and so on. You might decide that removing redundancy is worth some extra design time, for example (although this is rare: “sooner” usually has much more weight than “better”). Or you might decide the opposite, in which case the calculation can be bundled in with the programme oversight so that someone doesn’t come along and say “Oh look, there’s some redundancy [i.e. we are wasting money and time], we should do something [i.e. spend money and time] about that”.

By making these comparisons we can hopefully avoid the ‘flip flop’ between strong integration committees that force everything to a grinding halt, and strongly ‘agile’ approaches that result in unwelcome and (worse) unexpected gaps between important systems at critical times.

Between these extremes of ‘Ultimate Coherence Never’ and ‘Agile Incoherence Now’ should be sweet spots – or at least not too bitter ranges – of arranging for ‘Just Integrated Enough’.

For example, the integration board could be a broker rather than an enforcer: “So you want to connect your 3D terrain data to that 3D terrain data? Well, such-and-such did that; they already have an interface protocol/specification.” Having a proven interface specification can improve the speed and reliability of a new integration, so it is attractive to integrators if it is appropriate, and the people who can judge that are the integrators, not a remote board.

The board could be a ‘go-to’ place for a repository or library of existing and emerging interface protocols, storing and noting which systems use which protocols, and so being able to ‘tweak’ and ‘adjust’ emerging standards rather than dictate them.
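In code terms, such a repository could be little more than a lookup from protocols to the systems that already use them, so the board can answer “who has done this before?” rather than issue edicts. The sketch below is only illustrative; the class, protocol names and system names are all hypothetical:

# A minimal broker/registry sketch: record which systems use which
# interface protocols, so a new integrator can be pointed at a proven
# specification instead of waiting for one to be dictated.
from collections import defaultdict

class ProtocolRegistry:
    def __init__(self):
        self._users = defaultdict(set)   # protocol name -> systems using it

    def register(self, protocol, system):
        """Note that 'system' exchanges data using 'protocol'."""
        self._users[protocol].add(system)

    def suggest(self, keyword):
        """Return existing protocols (and their users) matching a need."""
        return {p: sorted(u) for p, u in self._users.items() if keyword in p}

registry = ProtocolRegistry()
registry.register("terrain-3d-v2", "MissionPlanner")
registry.register("terrain-3d-v2", "SensorFusion")
print(registry.suggest("terrain"))   # {'terrain-3d-v2': ['MissionPlanner', 'SensorFusion']}

Keeping the register descriptive rather than prescriptive is what lets the board tweak and adjust emerging standards instead of dictating them.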

Ideally, the board should be a place that creates the right incentives across the enterprise so that people sort out the cross communications amongst themselves, in the same way as complex supply chains form and reform along the aligned incentives of money.

Enterprise-wide interoperability requires enterprise-wide governance, but governing bodies should set policy, not attempt to police.

This essay is the result of discussions with several people at the RUSI information management conference in September 2012.


Quality & Peer Review. Again.

March 10, 2011

The House of Commons Science & Technology Select Committee are holding an inquiry into Peer Review.

Like previous investigations, they focus on peer review as a vehicle for quality assurance and scientific discourse, rather than starting with what they want and working backwards. As peer review occurs after the work has been done, it simply cannot be used to assure the quality of the work – although it may be used to assure the quality of the published paper.

Instead, the government could develop the quality controls already being introduced by academic institutions, and use these to assure the quality we would like to see in studies used to inform policy.

Today I submitted a short paper to the inquiry saying this in more detail:

MartinHillForSTC3

(Word doc, about 3 pages in large type, with a few minor language corrections from the submitted document)


Expert Confidence and Probability

October 22, 2010

[Photo: tyre marks on the A4042]

In the photo above are two parallel marks that we associate with tyre material scraping off onto the tarmac when wheels ‘lock’ under braking.

The marks are about a hundred metres long and are reasonably even until the last few metres. An emergency brake from around 80mph would use about that distance.
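That claim is easy to sanity-check with a back-of-envelope calculation. The figures below are assumptions (a locked-wheel deceleration of roughly 0.65g is plausible for a heavy vehicle on dry tarmac), not measurements from the scene:

# Rough stopping-distance check: d = v^2 / (2a)
MPH_TO_MS = 0.44704
v = 80 * MPH_TO_MS                  # initial speed, ~35.8 m/s
a = 0.65 * 9.81                     # assumed locked-wheel deceleration, ~6.4 m/s^2
print(f"{v**2 / (2 * a):.0f} m")    # ~100 m

which is consistent with tyre marks of about a hundred metres.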

The vehicle was under steering control, however; there is a small weave a few tens of metres after the marks start, and the vehicle pulls into the parking bay that the photo was taken from. Wheels that are locked cannot be used to change direction, so these marks were not left by the front wheels.

It may not be clear from the image, but the tyre marks are continuous – there is no sign of the stuttering on/off/on/off that you get with Antilock Braking Systems.

The controlled steering, and the reasonably even strength of the mark along most of the track, suggest tyres being dragged along under power until the vehicle could pull into a layby.

Commercial lorry trailers have emergency or parking brakes that are applied when the control coupling to the tractor fails. This stops trailers from running away if they come apart from the pulling tractor, which could be particularly invigorating for other road users on down-slopes.

The Conclusion

We can reasonably conclude that the brake line (probably an air coupling) was not properly secured and came apart under vibration, the trailer tyres locked, the driver swerved slightly under the sudden pull, then turned into the layby to deal with the problem, and once it was dealt with drove off.

Now how do we rate confidence in that conclusion? What do I mean by “reasonably” when I say “we can reasonably conclude”? What do I mean by “probably” if I say “This is probably what happened”?

The Confidence

We may be tempted to use frequency-probability terms: “I am 90% sure”. If we were analysing many of these marks across the country, and were able to use CCTV footage or track down witnesses to find out what actually happened, then such terms would be appropriate. But in this case we have no priors, only one situation to assess, and no clear view of the possible solution space. Framing confidence as “there is a 6 out of 10 chance that this is true” is not helpful and probably (heh) misleading.

The narrative in the conclusion above describes why we came to that conclusion and lets the reader form their own confidence: you can see the reasoning and check with other sources and experts that the story is indeed plausible. Confidence in plausibility is a requirement for confidence in the conclusion, but it is not sufficient.

We don’t – and possibly cannot – know all the other possible solutions to the problem. We can sit around in a group and brainstorm some other options, some of which will be more likely (a caravan rather than a lorry trailer) than other believable ones (a rear-steer car braking hard) or stranger ones (a practical joker with a pot of paint who goes around painting marks on roads). We are still left with the ‘unknown unknowns’ – the large space of possible, plausible and likely solutions that we haven’t thought of.

The Need

This is obviously a problem when trying to convey expert opinion to non-experts, particularly decision makers. Decision makers have to weigh opinion from many different experts whose subjects often do not compare directly (hospitals vs education), and so of course they want to know how sure each expert is.

[There is also the huuuuge subject of how framing information changes decisions]

The decision maker may also want to know something about the expertise of the expert: is the conclusion above from a deskbound road safety researcher, from a brake designer, a tyre designer, a traffic officer, a truck driver, a passing motor enthusiast, or a tree-hugging homeopath? Or someone with experience in all of these?

Not only is the ‘quantity’ of the expert’s expertise important, but so is its quality. A narrow-minded expert may read the opinions of other experts incorrectly (do Antilock Braking Systems actually stutter?) or incompletely (if most do, are there some that do not?). An expert from a small community may also be ‘warped’ by prevailing attitudes in that community; biases in the expertise will frame the way the expert approaches the likely solutions.

My friend, ex-colleague and Very Difficult Person, John Salt, asserts that expertise is the internalising of knowledge in a particular domain, and that that very internalising makes it difficult to understand how well conclusions are formed, let alone describe them.

I am more optimistic, but I don’t understand why and certainly can’t explain it.


Life & Death Decisions using Sparse, Unreliable Evidence

August 23, 2010

Wahey, my first ‘paper’ (here, PDF) has been reviewed and accepted for the European Conference on Information Management and Exploitation in early September. It looks rather lonely on its own on my shiny new publications list page, but the title makes up for it.

It became a bit of a brain dump for everything I could think of that was relevant, so it’s a broad sweeping outline with plenty of pieces to pick up and work on in more detail.

Many thanks to John Salt for helping to write it, to my parents for their initial review, and to friends, especially Tom Targett and Eric Titley at the ROE, for proofreading the later versions.