
Indicators

* try to find out the biggest issue that needs to be addressed in a given situation (i.e. which indicator is most requested) and decide whether it is a feasible problem or not. If it is not feasible, ruthlessly set it aside and look at the next most urgent. What we choose _not_ to do is going to be important (ref to Steve Jobs: "there is no shortage of good ideas, but you have to say no to all of them if you are going to work on the best ideas"). Don't be distracted by unrealistic expectations.
* in looking at indicators from other parallel development areas, we should focus not on the "technical" fit of the indicator to mine action as our first criterion, but on how well the indicator is accepted in the field. Build up a library of these and then look for common factors and characteristics in why and how they are accepted (and perhaps also the timescale and process from first use to acceptance). If our main problem is perception, then that must be addressed analytically as far as possible.
* accept that indicators are partial and indicative, and generally only indicative of anything when it is too late, but that this is _far_ better than the alternative of working blind on hearsay and widely accepted myths and lies about effectiveness and impact. This is a tough challenge, as most people will be far happier to continue with what they have always done, even if it is wrong, than to change to something which we know is only partially accurate.
* remember that "doing the right job" is more important than "doing the job right", and so look at how other sectors address the "right job" issue.