Analyses of many of the existing datasets have not shown overwhelming
evidence of trends, even in parameters that we believe to be changing.
Often the conclusion is that we simply have not collected enough data. But how
many data points are needed to detect trends? How can we distinguish between
a situation of having too few data to detect a trend and a situation in which
no significant trend exists? How can we optimize our current monitoring so
that it is best able to detect expected trends?
Numerical methods are available for addressing
these questions. These methods must account for the inherent properties we
observe in environmental data: spatial correlation, temporal "memory"
(autocorrelation), natural variability, and measurement uncertainty.
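As an illustration of such a numerical method, a widely used approximation for trend detection in a time series with AR(1) noise estimates the number of years of data needed to detect a linear trend with roughly 90% probability. The formula and the parameter values below are illustrative assumptions for a sketch, not results quoted from this text.

```python
import math

def years_to_detect(trend_per_year, noise_sd, phi):
    """Approximate years of data needed to detect a linear trend
    with ~90% probability, assuming the residual noise is an AR(1)
    process with lag-1 autocorrelation `phi` and standard deviation
    `noise_sd` (in the same units as the data).

    Classic approximation used in trend-detection studies:
        n* = [ (3.3 * noise_sd / |trend|) * sqrt((1 + phi) / (1 - phi)) ]^(2/3)
    Stronger temporal "memory" (larger phi) and larger natural
    variability (larger noise_sd) both lengthen the required record.
    """
    if not -1 < phi < 1:
        raise ValueError("phi must lie in (-1, 1)")
    signal_ratio = 3.3 * noise_sd / abs(trend_per_year)
    memory_inflation = math.sqrt((1 + phi) / (1 - phi))
    return (signal_ratio * memory_inflation) ** (2.0 / 3.0)

# Illustrative (assumed) numbers: a trend of 0.05 units/yr against
# noise with sd 1.0 and lag-1 autocorrelation 0.3.
print(round(years_to_detect(0.05, 1.0, 0.3), 1))  # roughly 20 years
```

Evaluating the same trend with phi = 0.6 instead of 0.3 lengthens the required record noticeably, which is why temporal correlation cannot be ignored in these analyses.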
Results of such analyses have been quite surprising. Among the results
are that some locations are inherently better for detecting trends than
others, and these locations are difficult to guess. Some parameters, such as
total column ozone and CO2, are particularly conducive to trend detection.
Accuracy, precision, continuity in datasets, and spatial resolution or number
of monitoring stations are quantifiably critical in trend detection and are
among the issues that must be well understood for trend analyses.
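The quantitative role of measurement precision and station count can be sketched with a simple variance budget. The decomposition below assumes, purely for illustration, that natural variability is coherent across stations while measurement errors are independent from station to station, so averaging over m stations shrinks only the measurement-error term; all numbers are assumed, not taken from this text.

```python
import math

def effective_noise_sd(sigma_natural, sigma_measurement, n_stations):
    """Standard deviation of noise in a network-mean series, under the
    illustrative assumption that natural variability is common to all
    stations while measurement errors are independent across stations:
        sigma_eff = sqrt(sigma_natural**2 + sigma_measurement**2 / m)
    Adding stations helps only until the measurement term is negligible;
    natural variability sets a floor that more stations cannot remove.
    """
    if n_stations < 1:
        raise ValueError("n_stations must be at least 1")
    return math.sqrt(sigma_natural**2 + sigma_measurement**2 / n_stations)

# Illustrative (assumed) budget: natural sd 1.0, per-station
# measurement sd 0.5, networks of 1, 4, and 25 stations.
for m in (1, 4, 25):
    print(m, round(effective_noise_sd(1.0, 0.5, m), 3))
```

Because detection time grows with the effective noise level, this kind of budget makes precision and station count quantifiable design choices rather than qualitative ones.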