About this webinar
Seismic coherence volumes are routinely used to delineate geologic features that might otherwise be overlooked on conventional amplitude volumes. In general, the quality of a coherence image is a direct function of the quality of the input seismic amplitude data. However, even after careful processing, spectral balancing, and data conditioning, features that are obvious to a human interpreter may not be mapped by coherence. The most common failure is when the offset across a fault aligns reflectors from different horizons that exhibit a similar waveform. Whereas human interpreters (and more recently developed deep learning algorithms) use the information content provided by shallower and deeper discontinuities along the fault, coherence algorithms are limited to mapping the discontinuities within a small analysis window that approximates the dominant period of the seismic data.
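The windowed computation described above can be illustrated with a minimal numpy sketch of an eigenstructure-style coherence estimate; the function name and window handling are illustrative, not the presenter's specific implementation:

```python
import numpy as np

def eigenstructure_coherence(traces):
    """Coherence over a small analysis window.

    traces: 2D array (n_samples, n_traces) of amplitudes from
    neighboring traces within a vertical window that approximates
    the dominant period of the seismic data.
    """
    # Covariance matrix of the traces across the window
    c = traces.T @ traces                 # (n_traces, n_traces)
    eigvals = np.linalg.eigvalsh(c)       # ascending order
    # Ratio of the largest eigenvalue to the total energy:
    # 1.0 for perfectly parallel traces, lower across discontinuities
    return eigvals[-1] / np.trace(c)
```

Because the window spans only about one dominant period, reflectors from different horizons that happen to align across a fault can yield a deceptively high coherence value, which is the failure mode noted above.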
Interpreters have long observed that certain spectral components illuminate a given geologic feature better than others. Although the waveforms juxtaposed across such a fault may match in phase at the dominant frequency, they will mismatch at other frequencies, resulting in coherence anomalies at those components. In general, lateral phase changes across the edges of incised channels are greater for frequency components above the tuning frequency, resulting in stronger coherence anomalies at those frequencies. For these reasons, one may wish not only to examine coherence computed from different filter banks, but also to combine them somehow into a single composite image.
There are three ways to combine the information content provided by coherence computed from multiple spectral volumes. The first approach is available in almost all interpretation workstation software. Here, the interpreter bandpass filters or spectrally decomposes the original data volume to form three or more new bandlimited seismic amplitude volumes. After computing coherence for each of the new volumes, the interpreter corenders them using either CMY or RGB color blending. Anomalies that are strong on all three spectral components appear as black, whereas those that show up on only one or two of the components appear in color. In this approach, the information content of only three components can be corendered in a single image. The second approach is to combine three or more components using a dimensionality reduction algorithm such as principal component analysis or self-organizing maps. Anomalies consistent at multiple frequencies will be represented by the first one or two principal components or by a subset of the SOM clusters. The third approach is less commonly available and requires modifying the coherence algorithm to add the covariance matrices of each of the spectral components prior to computing coherence. Regardless of the approach used, the computation time increases linearly with the number of components computed.
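The third approach can be sketched in a few lines of numpy; this is a schematic of the summed-covariance idea under an eigenstructure-style coherence measure, with illustrative names rather than a specific vendor implementation:

```python
import numpy as np

def multispectral_coherence(spectral_windows):
    """Sum the covariance matrices of each spectral component
    before computing a single coherence value.

    spectral_windows: list of 2D arrays (n_samples, n_traces),
    one bandlimited version of the same analysis window per band.
    """
    n_traces = spectral_windows[0].shape[1]
    c_total = np.zeros((n_traces, n_traces))
    for band in spectral_windows:
        c_total += band.T @ band   # accumulate each band's covariance
    eigvals = np.linalg.eigvalsh(c_total)
    # Single coherence value driven by all bands at once
    return eigvals[-1] / np.trace(c_total)
```

Note the cost implication stated above: each additional band contributes one more covariance accumulation, so runtime grows linearly with the number of spectral components.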
Discontinuities that are consistent across multiple spectral bands provide strong anomalies in multispectral coherence, filling in fault gaps seen on coherence volumes computed from the original broadband data. Because random noise provides inconsistent discontinuities across different spectral bands, its coherence expression is suppressed. Although a single multispectral coherence volume is easier to interpret than multiple bandlimited coherence volumes, some of the information content provided by the bandlimited coherence volumes can be lost. If this information is important, the interpreter should revert to RGB blending of the more important volumes.
In this webinar, the presenter will show examples of how multispectral coherence can improve the delineation of both tectonic and stratigraphic discontinuities on 3D seismic data. He will also show how the same concepts can be generalized to combine the information content of multiple azimuthally and offset-limited data volumes.