Scientific Search Planning
Scientific search planning is based on notions of probability no more complicated to understand than simple gambling strategies. Anyone who can grasp the concept of the probability of a certain roll of a die can understand the four concepts covered here:
Location Probability and Clues
Probability of Detection
Probability of Success (POS)
Modifying Location Probability After Unsuccessful Search
Location Probability and Clues
The first probability concept has to do with where you think
the target is before you start the search. This is called probability
of location; it is characterized by the possible error associated
with this initial estimate.
Every clue has some error attached because, were there
no error, there would be no search problem: you would just go
to the target and find it immediately. The problem is that, although
there are always clues and their most likely (or mean) locations
are given, you will rarely be given the value of the expected
error, so you must deduce the location error from the nature of the clue itself.
If we have two or more clues, we may combine them, but we must
pay careful attention to how we do it. The correct way is not
always obvious. Suppose that you
are given two clues to the location of a wreck, both of which
were derived from very accurate navigation that gives one-half
mile accuracy, and that the two clues are one-quarter mile apart.
If you believe that both clues are valid, then the mathematics
tells us that the most likely position of the target is midway
between (or the mean of) the two clues. Surprisingly, it also
tells us that the error in position is smaller than the error
in either clue alone.
This is the principle of repeated measurement (the carpenter's
rule is, "measure twice, cut once"). If you repeatedly
take independent measurements of the same thing and average your
readings, you will continue to reduce your error, almost without
limit. So, usually, the more clues you have, the smaller the
uncertainty area of the target's location.
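The effect of repeated measurement can be sketched numerically. The simulation below is illustrative, not from the article: the half-mile error figure comes from the example above, while the true position, trial count, and function names are assumptions. Averaging four independent fixes roughly halves the average error of a single fix.

```python
import random
import statistics

# Illustrative sketch: simulate repeated independent position fixes with a
# 0.5-mile standard error and show that averaging them shrinks the error
# roughly as 1/sqrt(n).
random.seed(42)
TRUE_POSITION = 10.0   # hypothetical true location (miles along a line)
SIGMA = 0.5            # per-clue error, miles

def average_fix_error(n_clues, trials=20000):
    """Average absolute error of the mean of n_clues noisy fixes."""
    errors = []
    for _ in range(trials):
        fixes = [random.gauss(TRUE_POSITION, SIGMA) for _ in range(n_clues)]
        errors.append(abs(statistics.mean(fixes) - TRUE_POSITION))
    return statistics.mean(errors)

one = average_fix_error(1)
four = average_fix_error(4)
# With four clues the average error is about half that of one clue.
print(f"1 clue:  {one:.3f} miles")
print(f"4 clues: {four:.3f} miles")
```

The 1/sqrt(n) improvement holds only for valid, independent clues, which is exactly the caveat the next paragraphs develop.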
On the other hand, suppose you are given two clues with the same
half-mile accuracy, but they are 100 miles apart. In this
case, we cannot imagine that the most likely position of the
target is halfway between them and, of course, it is not. The
reason is that the odds against a valid observation's error being
100 times as large as its probable error are enormous and therefore
we must conclude that there is something wrong with one or both
of the clues.
In this example, then, although we have two clues, they are
not valid, independent measures of the same position.
In fact, we call them mutually exclusive, that is, one (or both)
must be false. A false clue is one that is badly flawed in some
way other than random "noise." For instance, if you
take a Loran reading and transpose two digits, then the resulting
position cannot be expected to behave like a correct reading
with normally distributed error. Depending on the digits, it
could be hundreds of miles in error.
You may suspect that a clue is false, either because it conflicts
with another clue, as in the example above, or because of something
you know about its source. In order to use such a clue
properly, you need to estimate the probability that it is false.
In cases of conflicting clues, we often do not know which is
flawed and therefore cannot throw either one away. However, we
may have reason to believe that one clue is more likely to be
valid than the other. If we say that a clue has a 90% confidence,
we mean that there is a 10% probability of the clue being false.
In the case of the two-clue problem above, we have not one but
two likely positions and our probability map should show two
high-probability circles 100 miles apart. You will need
to search both small areas but not the enormous region in between.
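As a rough illustration of such a two-mode map, the one-dimensional sketch below treats each clue as a Gaussian component weighted by its confidence. The 90%/10% split and half-mile errors are assumptions chosen to match the examples above: virtually all of the probability density sits in two narrow regions, with essentially none midway between them.

```python
import math

# Illustrative sketch (not the article's software): a 1-D probability "map"
# built from two mutually exclusive clues 100 miles apart, each with a
# 0.5-mile standard error.  Each clue's confidence is the weight of its
# component in the mixture.
def gaussian(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(x, clues):
    """clues: list of (mean_position, sigma, confidence); confidences sum to 1."""
    return sum(conf * gaussian(x, mean, sigma) for mean, sigma, conf in clues)

clues = [(0.0, 0.5, 0.9), (100.0, 0.5, 0.1)]  # 90% vs. 10% confidence
print(mixture_density(0.0, clues))    # near clue 1: high density
print(mixture_density(50.0, clues))   # midway: essentially zero
print(mixture_density(100.0, clues))  # near clue 2: lower but significant
```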
Earlier search planning software treated all clues as mutually
exclusive and required one to assign a weight to each of them.
The recommended search plan always included all the clues and,
when you added another clue, the search area expanded to include
that clue. But if all clues are valid, according to our knowledge
of repeated measurements, adding another valid clue should always
make the search area smaller.
This picture shows a
probability map created from three clues, all of which have less than 100%
confidence. The majority of the target's probability is contained within
the very small dark area in the center of the map but some of the
probability is spread over a very wide area. This is because one of the
clues has a very large error distribution and there is some probability
that it is the only true one.
A recent real-life example shows how important this can be.
In 1980, the UK oil/bulk/ore carrier DERBYSHIRE
sank in a typhoon off Okinawa without a distress signal and with
the loss of all hands. Many in the industry suspected that construction
flaws common in ships of the class were a factor in the sinking.
Without evidence, however, the British government refused to
re-open their inquiry and claimed (based on the analysis of some
experts) that a search would be too expensive. In June 1994,
the International Transport Workers' Federation sponsored an
underwater search for the wreck, but their budget was extremely limited.
The salvage company, Oceaneering Technologies, Inc., contacted
Wagner Associates and we developed a much smaller search probability
area that had a high probability of success, based on three high-confidence
clues. Our search plan was used, and the target was found
very quickly (on the first day).
Probability of Detection
The second concept relates to the probability that, if you
come within range of the target with one or more sensors, you
will detect it. This is called Probability of Detection (PD).
The basic concept is that you can define a sensor's performance
according to the distance at which it passes a target (called
closest point of approach, or CPA) and the probability that it
will detect the target at that passing distance. This can be
plotted on a simple graph, called a lateral range curve.
If you are using multiple sensors on a single platform, then
you can combine curves from all the sensors into a single curve
for the platform as a whole. In very large efforts, it is often
wise to spend a few hours or days conducting experiments with
practice targets to calibrate the effectiveness measures.
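One common way to combine sensors, assuming they detect independently, is to multiply their miss probabilities at each lateral range. The sketch below uses hypothetical detection values for a sonar and a camera; the numbers and names are illustrative, not taken from the text.

```python
# Illustrative sketch: combine per-sensor lateral range curves into a single
# platform curve, assuming the sensors detect independently:
# P(combined) = 1 - product(1 - P_i) at each closest point of approach (CPA).
def combined_detection(prob_by_sensor):
    """Combined detection probability from per-sensor probabilities at one CPA."""
    miss = 1.0
    for p in prob_by_sensor:
        miss *= (1.0 - p)
    return 1.0 - miss

# Hypothetical lateral range curves sampled at CPAs of 0, 100, 200, 300 m.
sonar  = [0.95, 0.80, 0.40, 0.05]
camera = [0.90, 0.30, 0.05, 0.00]
platform = [combined_detection(ps) for ps in zip(sonar, camera)]
print(platform)  # at CPA 0: 1 - 0.05 * 0.10 = 0.995
```

The combined curve is never worse than the best single sensor at any passing distance, which is why stacking sensors on one platform pays off.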
Probability of Success (POS)
You can develop a predicted POS curve as an important planning
tool in helping to decide whether to search in the first place
and how much search expenditure to budget.
With such a chart, the planner can see that there is good
return from the early search but that, as time goes on, there
is a diminishing return on effort.
Based again on the probability mathematics, the expected amount
of effort needed to find the target corresponds to the point where
the measured POS reaches a value of about 2/3. This reflects the fact
that it takes progressively more effort to raise the POS later in the
search, again because of diminishing returns.
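The "about 2/3" figure is what the classical random-search (exponential detection) model predicts; that model is an assumption here, not something the text states. Under it, POS = 1 − exp(−u), where u is effort measured in units of the expected effort to find the target, so POS at u = 1 is 1 − 1/e, roughly 2/3.

```python
import math

# Sketch of the diminishing-returns POS curve under the exponential
# (random-search) detection assumption: POS = 1 - exp(-u), with effort u
# in units of the expected effort to find the target.
def pos(effort):
    return 1.0 - math.exp(-effort)

for u in [0.25, 0.5, 1.0, 2.0, 3.0]:
    print(f"effort {u:4.2f} -> POS {pos(u):.3f}")
# At u = 1 (the expected effort), POS = 1 - 1/e, about 0.63;
# doubling the effort to u = 2 adds only about 0.23 more POS.
```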
Since we can estimate the POS for any search, we can construct
a large number of candidate searches and choose the one that
has the highest predicted POS. This is called optimal search
planning, and the computer can make such a selection very quickly.
Wagner Associates analysts can use the MELIAN II system
features to create search patterns and also predict their Probability
of Success (POS). In addition, given a specified search duration,
MELIAN II will select the optimal rectangle search in terms
of center of search, size of box, and number of legs.
The MELIAN II software takes care of all the details of
the turns, including tow-induced turning delays, and allows the
planner to select the order of the legs of the pattern. Later,
with the MELIAN II computer on board, the leg positions can be
combined with GPS readings and fed into the automatic pilot to
steer the vehicle through the pattern.
Modifying Location Probability After Unsuccessful Search
The fourth probability concept relates to modifying a probability
map for unsuccessful search. Sometimes searches are planned in
stages, where a short, high-probability search is followed by
one or more much longer searches. The idea here is to try to
get a quick success and save the expense of a more exhaustive
search. Such a strategy takes advantage of the decreasing returns
and often succeeds early, much to the advantage of schedule and
We can apply the same optimal search techniques for a sequence
of searches as for a single search. But for each successive search,
we must modify the probability map based on the probability of
detection achieved in the earlier searches. This step is based
on simple probability logic: if one looks for a target in one
area and does not find it, then the probability that the target
is there should be lower after the search, and the probability
that the target is everywhere else should be higher.
Suppose that a target is in
one of three boxes, with probabilities .5, .3, and .2.
Now, suppose we search in box
1 with a conditional probability of detection of 60%. That means
that if the target is in box 1, we have a 60% chance of detecting it.
If we don't find it, we should
adjust the probabilities for the event of a failed search. Multiply
each original probability by the failure probability (one minus
the probability of detection) in that cell. These failure probabilities
are .4, 1.0, and 1.0, giving unnormalized values of .2, .3, and .2.
The sum of the new probabilities
is now 0.7, but if we believe the target is still somewhere in
one of the three boxes, the probabilities must add up to 1.0.
So, we just divide each number by the new total (e.g., 0.2
/ 0.7) and get the new distribution: .286, .429, and .286.
Notice that the probability in the box covered by the search
is lower and that the probabilities in the boxes not covered by
the search are higher.
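The three-box arithmetic above can be written as a short Bayesian-update sketch; the function name and list layout are illustrative choices, not the article's software.

```python
# The three-box example, as a minimal Bayesian update after an unsuccessful
# search: priors .5/.3/.2, with a 60% conditional PD applied only in box 1.
def update_after_failed_search(priors, pd_by_box):
    """Multiply each prior by its failure probability, then renormalize."""
    unnormalized = [p * (1.0 - pd) for p, pd in zip(priors, pd_by_box)]
    total = sum(unnormalized)           # here .2 + .3 + .2 = .7
    return [u / total for u in unnormalized]

posterior = update_after_failed_search([0.5, 0.3, 0.2], [0.6, 0.0, 0.0])
print([round(p, 3) for p in posterior])  # [0.286, 0.429, 0.286]
```

The searched box's probability drops from .5 to about .286, while the unsearched boxes rise, exactly as the text describes.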
For further information and questions, contact
W. Reynolds Monach.