Spatial Resolution Digital Imagery Guideline

This guideline discusses commonly used spatial resolution metrics illustrated with image simulations. The purpose is to assist the remote sensing community - image users and buyers, as well as image providers and even sensor designers - by visually demonstrating how certain image specification parameters affect spatial resolution. Several examples of imagery show first-hand how these parameters influence image quality. 

The guideline provides a high-level tutorial describing how spatial resolution is specified, as well as a catalog of images so the user can relate these specifications to image products. The general reader should be able to follow the material in the main text with ease. Sidebars provide greater detail for the interested reader, and references and hyperlinks to comprehensive explanations and discussions provide more detail for those desiring an in-depth understanding. The main text discussion is followed by a set of simulated image chips, which are available for download so that the user may adjust their display (zoom/stretch).

This guideline is applicable to imagery that is:

• Acquired with passive electro-optical (EO) imagers operating in the visible spectral region

• Both color and monochrome (panchromatic) 

• Nadir or down looking 

• Native and not sharpened

• High or moderate resolution (between 5 cm and 10 m GSD)

• Acquired with imagers that have well-behaved, symmetrical or Gaussian-like system-level Point Spread Functions (PSFs), discussed later

• High Signal-to-Noise Ratio (SNR)

Why Use Simulated Imagery?

Final image product resolution depends not only on a camera’s properties, but also on how an image is acquired and what image processing algorithms are applied. Different platforms (aircraft, satellites, small Unmanned Aerial Vehicles (UAVs), etc.) and different installations affect imagery differently. Acquisition conditions such as sun angle, haze, and height above the target area also affect image quality. Comparing imagery acquired and processed under one set of conditions with imagery acquired and processed under a different set of conditions makes it difficult or even impossible to separate camera specification effects from image processing and image acquisition effects. By simulating imagery, we remove image acquisition and processing variability from this discussion. While the imagery shown is for a specific set of conditions, the simulations vary only the spatial resolution parameters for that set of conditions.

What is Spatial Resolution?

Spatial resolution determines the smallest discernible feature within an image (Holst, 2006). Often, the spatial resolution of remotely sensed imagery is described only in terms of pixel spacing, or Ground Sample Distance (GSD).

Example imagery with a 15 cm GSD

Same image area at 1 m GSD

While significant, GSD is only one aspect of spatial resolution. Two other important features that affect image quality and interpretability are image sharpness, or blur, and image noise, often characterized by the Signal-to-Noise Ratio (SNR). Here we will highlight the effects of image sharpness and GSD on image quality. SNR effects will be discussed on a separate page. Within this guideline, all simulated imagery has a high SNR, exceeding 100 throughout most of the scene (except for shadows).

Two images can have the same GSD but different levels of image sharpness and look very different. 

Imagery at 15 cm GSD; Image is in focus. Image Sharpness “Good”. 

Imagery at 15 cm GSD; Image is blurry. Image Sharpness “Poor”. 

Why is Spatial Resolution Important?

Spatial resolution can impact the usefulness of a data set for different applications. Some applications may involve identification of small objects such as manhole covers, letters on a roadway, or roof damage after a storm, while others may focus on large features, such as agricultural fields, that cover extensive areas.

An understanding of the effects of GSD and image sharpness can guide data acquisition parameters, such as acquisition height or image stabilization requirements. Spatial resolution can affect a person’s ability to extract useful information from imagery. Understanding spatial resolution can help optimize the amount of data needed, aid image quality assurance, and even drive camera design. 

Sensors are being developed with improved optics (detector-limited sensors). Image sharpening algorithms that improve image quality also exist. Sharper images, however, are not always better images. Data sets that are detector-limited can be over-sharpened and will appear pixelated, or aliased. Aliasing causes fringing patterns, such as Moiré patterns, that are not physically present in a scene to appear in an image.

Non-aliased imagery at 15 cm GSD.

(Zoom shows roof detail.) 

Aliased imagery at 15 cm GSD.

(Zoom shows roof detail.) 

Remote sensing image end users or data providers may opt to trade GSD for image sharpness when acquiring data sets. A blurrier data set that has a smaller GSD could provide the same level of detail as a sharpened, slightly aliased data set with a larger GSD, but the latter may be preferred because it may cover a larger area of interest. The following table summarizes some of the image acquisition aspects that are affected by image sharpness.

Additionally, spatial resolution is a component of the National Imagery Interpretability Rating Scale (NIIRS) used by the National Geospatial-Intelligence Agency to assess image utility. 

How is Image Sharpness Defined?

Image sharpness can be defined in several different ways. Before these can be described, we first need to understand what happens when a remote sensing imager acquires an image.


Image Formation

Adapted from Schott, John R., Remote Sensing: The Image Chain Approach, Oxford University Press, 2007.

The image chain 

When an electro-optical imaging system measures light from a single point, or point source, the light is not acquired by a single detector. Rather, the light is spread over and measured by several detectors. This point source system response is a result of a measurement system’s detectors and optics and is called the system Point Spread Function (PSF). The PSF is therefore a measure of how sharply an imaging system can acquire imagery. In practice, however, a measurement system’s PSF is very difficult to measure directly due to SNR and sampling considerations.

When an electro-optical system images a feature such as an edge formed by adjacent dark and bright areas, the PSF blurs the edge contrast in the resulting image. This process is mathematically equivalent to convolving the PSF with the edge (Boreman, 2001; Gaskill, 1978; Goodman, 2008).
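To make the convolution concrete, the following minimal Python sketch (using NumPy and SciPy, and assuming an idealized Gaussian PSF with an illustrative width) blurs a synthetic dark-to-bright edge:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Ideal edge scene: dark (0) on the left, bright (1) on the right.
    scene = np.zeros((64, 64))
    scene[:, 32:] = 1.0

    # Assume a Gaussian system PSF; sigma is in pixels (illustrative).
    blurred = gaussian_filter(scene, sigma=1.2)

    # `blurred` is the scene convolved with the PSF: the one-pixel
    # transition at column 32 now spreads across several pixels.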

Targets

Specially engineered resolution targets, such as edges or tri-bars, are often used to quantify image sharpness. Edges are particularly useful since all the appropriate information can be derived from a high-quality (high SNR) edge response without incurring sampling issues. Edges are also convenient because they occur naturally throughout many urban and agricultural scenes and, when properly screened, can be used to evaluate spatial resolution. Tri-bars are good for quick visual checks of the minimum resolvable target size.

Sample engineered edge target and tri-bar targets (Pagnutti et al., 2010)

Measures of Image Sharpness

Image sharpness can be defined using either spatial data (measured as a function of X and Y) derived from physical features, or spatial frequency data (measured as a function of u and v) derived from Fourier analysis. When spatial data is used, the assessment is said to be in the Spatial Domain. When spatial frequency data is used, the assessment is said to be in the Frequency Domain. We describe both types of measures below. The following figure introduces several terms from both perspectives, spatial and frequency, and generally outlines how they relate to each other mathematically.

Adapted from Schowengerdt, Robert A., Remote Sensing: Models and Methods for Image Processing, Elsevier, 2007.

Image sharpness measure relationships 

Image sharpness can be found using an imaged edge by measuring how quickly the image transitions from dark to bright. This can be accomplished by looking at the Edge Spread Function, or normalized Edge Response. To produce an Edge Response, transects across an edge are aligned and centered, and the values are normalized so they range from zero to one.

Edge Response (ER) curve generation 
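A minimal Python sketch of this procedure is shown below; the function name and the simple peak-of-derivative alignment are illustrative assumptions (operational tools estimate the edge location to sub-pixel precision):

    import numpy as np

    def edge_response(transects):
        # `transects` is a 2-D array, one transect per row, each
        # crossing the same near-vertical edge.
        aligned = []
        for t in transects:
            # Center each transect on its steepest transition.
            center = np.argmax(np.abs(np.diff(t)))
            aligned.append(np.roll(t, t.size // 2 - center))
        er = np.mean(aligned, axis=0)
        # Normalize so the dark side is 0 and the bright side is 1.
        return (er - er.min()) / (er.max() - er.min())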

In practice an edge response curve may contain artifacts, including ripple, overshoot, and undershoot. These can be caused by the non-ideal physical condition of an edge target, or by edge sharpening or other image processing algorithms used by sensor manufacturers and image providers.

Normalized Edge Response (ER) curve 

Note: To best understand an imaging system’s performance, these measurements should be made on raw, radiometrically corrected data. If measurements are made using resampled, geometrically corrected, or terrain-corrected data, the spatial performance measures will also include processing effects introduced by the entire image processing chain.

Spatial Domain - Edge Response Based Measures

Several measures of image sharpness, or spatial performance, can be found directly from the normalized Edge Response. A common spatial performance metric is the Relative Edge Response (RER). The RER is defined as the slope (steepness) of the Edge Response within ±0.5 pixel of the center of the edge, that is, RER = ER(+0.5) - ER(-0.5). The RER represents how an imaging system responds to a change in contrast over one pixel. Higher RER values, resulting from steeper edges, indicate a sharper image. Lower RER values indicate a blurrier image.

Relative Edge Response (RER) calculation 
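Given a normalized Edge Response sampled on a pixel grid centered on the edge, the RER can be read off directly. The Python sketch below assumes the ER(+0.5) - ER(-0.5) form described above:

    import numpy as np

    def relative_edge_response(x, er):
        # `x`: pixel coordinates (edge at x = 0, increasing order);
        # `er`: normalized Edge Response sampled at `x`.
        return np.interp(0.5, x, er) - np.interp(-0.5, x, er)

For a perfectly sharp sampled edge this returns 1; heavily blurred edges return values well below 1.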

Alternate Spatial Domain Measures

Additional spatial measures of image sharpness can be determined from an imaged edge. Taking the derivative of the normalized Edge Response produces the Line Spread Function (LSF) (Gaskill, 1978; Boreman, 2001). The LSF is a 1-D representation of the system PSF: it is the PSF integrated along the direction of the edge. The width of the LSF at half its maximum height (the 50% point) is called the full-width at half maximum (FWHM), and it characterizes the width of the PSF in the direction across the edge.

FWHM of the Line Spread Function (LSF) 
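As a rough Python sketch, continuing from the sampled Edge Response above (operational implementations smooth the derivative and interpolate the half-maximum crossings):

    import numpy as np

    def lsf_fwhm(x, er):
        lsf = np.gradient(er, x)           # LSF = d(ER)/dx
        half = lsf.max() / 2.0
        above = np.where(lsf >= half)[0]   # samples at/above 50% of peak
        return x[above[-1]] - x[above[0]]  # coarse FWHM estimate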

Note: Because a system PSF is not necessarily symmetrical, edges in multiple directions should be assessed.

Frequency Domain Measures

Image sharpness can also be quantified in the frequency domain. The Fourier Transform of the LSF produces the Modulation Transfer Function (MTF) (Gaskill, 1978; Boreman, 2001; Goodman, 2008; Holst, 2006). The MTF measures the change in contrast, or modulation, of an optical system’s response at each spatial frequency. 
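A minimal Python sketch of this relationship is shown below, assuming an LSF sampled at one-pixel spacing so that the resulting frequencies are in cycles/pixel; the function name is illustrative:

    import numpy as np

    def mtf_from_lsf(lsf, sample_spacing=1.0):
        mtf = np.abs(np.fft.rfft(lsf))   # |Fourier Transform| of the LSF
        mtf /= mtf[0]                    # normalize so MTF(0) = 1
        freqs = np.fft.rfftfreq(len(lsf), d=sample_spacing)
        return freqs, mtf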

To better understand spatial frequency, suppose there is a target within the field of view of an imaging system whose brightness alternates just as a sine wave would. The target’s brightness cycles up and down with distance just as a sound wave, measured in Hz, cycles up and down with time.

Sinusoidal target with varying frequency and its corresponding brightness profile


NOTE: Spatial frequency targets like the one shown would be too large to build and maintain for large-GSD sensors (satellites). For small-GSD sensors, however, such as those flown on UAVs, they may be more practical.

Higher spatial frequencies to the right correspond to fine image detail, while lower spatial frequencies to the left correspond to less detail. In practice, the point at which the target could no longer be resolved would determine the spatial resolution.  
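For illustration, a brightness profile whose spatial frequency increases with distance (a linear chirp, similar to the target shown above) can be generated with a few lines of Python; the start and end frequencies are arbitrary illustrative values:

    import numpy as np

    x = np.linspace(0.0, 1.0, 2048)   # position across the target
    f0, f1 = 2.0, 64.0                # start/end spatial frequencies
    # Linear chirp: instantaneous frequency f(x) = f0 + (f1 - f0) * x.
    phase = 2 * np.pi * (f0 * x + 0.5 * (f1 - f0) * x ** 2)
    profile = 0.5 * (1.0 + np.sin(phase))   # brightness in [0, 1]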

MTF is determined across all spatial frequencies but can be evaluated at a single spatial frequency, such as the Nyquist frequency. The Nyquist frequency is defined as 0.5 cycles per pixel and is the highest spatial frequency that can be imaged without causing aliasing. In an example at the Nyquist frequency, the target input signal and imaging sensor output response may look like the following. Dividing the output response by the input signal gives the value of the MTF at that particular spatial frequency.

Spatial frequency target and imaging sensor response at the Nyquist frequency 

The value of the MTF at the Nyquist frequency is a common measure of image sharpness; it quantifies the resolvable contrast at the highest alias-free spatial frequency. A sample MTF curve with the Nyquist frequency highlighted is shown below.

MTF curve showing Nyquist frequency (at 0.5 cycles/pixel) 

Images with higher MTF at Nyquist values will be sharper but may have aliasing, while images with lower MTF at Nyquist values will be blurred. Typical MTF at Nyquist values for imaging sensors range from 0.1 to 0.4.

Additional frequency domain measures of image sharpness are the MTF at Half Nyquist and the MTF50 value. MTF50 is the frequency at which the normalized MTF curve falls to 50%. While used less frequently, both of these measures provide additional information about an imaging system’s spatial performance. Because optical effects such as obscuration affect mid-range spatial frequencies, acquired images can have MTF curves with the same MTF at Nyquist but very different shapes, resulting in differences in image quality. The MTF at Half Nyquist and the MTF50 value provide insight into the shape of the MTF curve, giving additional knowledge of image sharpness and spatial performance.

MTF curve showing MTF measured at several different frequencies 

Additional target frequencies alongside an imaging sensor’s output response. 
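Given an MTF curve such as the output of the mtf_from_lsf sketch above, these single-number measures can be read off with a few lines of Python; the linear interpolation and simple threshold search below are illustrative simplifications:

    import numpy as np

    def mtf_summary(freqs, mtf):
        at_nyquist = np.interp(0.5, freqs, mtf)        # 0.5 cycles/pixel
        at_half_nyquist = np.interp(0.25, freqs, mtf)  # 0.25 cycles/pixel
        # MTF50: the first frequency at which the curve falls to 0.5.
        below = np.where(mtf <= 0.5)[0]
        mtf50 = freqs[below[0]] if below.size else None
        return at_nyquist, at_half_nyquist, mtf50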

Summary of Spatial Resolution Metrics

Q-effective and a Way to Classify Imagery

One way to look at spatial resolution is to compare the FWHM of the LSF to the GSD of the imaging system. Relating the system PSF to GSD can define blur, but because the LSF is more routinely used and easier to determine, we use it here rather than the system PSF. Let’s define a parameter called Q-effective such that:

Q-effective = FWHM / GSD

If the FWHM is approximately equal to the GSD (Q-effective near 1), the imaging system is slightly aliased. If Q-effective is greater than 1, the FWHM of the LSF is larger than the GSD, and the imagery therefore appears blurry. On the other hand, if Q-effective is less than 1, the FWHM of the LSF is smaller than the GSD, and the imagery appears aliased. Typical imaging sensors operate with Q-effective between 1 and 1.5 (balanced).
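In code, this definition and the rough interpretation above reduce to a few lines of Python; the category boundaries below simply restate the discussion and are not hard thresholds:

    def q_effective(fwhm, gsd):
        # FWHM of the LSF and GSD, both in the same ground units.
        return fwhm / gsd

    def classify(q_eff):
        if q_eff < 1.0:
            return "aliased"    # FWHM smaller than GSD
        if q_eff <= 1.5:
            return "balanced"   # typical operating range
        return "blurry"         # FWHM well above GSD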

The effect of varying Q-effective values on a point source signal is shown below.  When Q-effective is greater than 1, the signal from the point source is blurred over a large area.  When Q-effective equals 1, the blur is reduced. When Q-effective is less than 1, the majority of the signal stays within the same pixel. 


Point Spread Functions (top row) for different Q-effective values, applied to an input point source, produce output signals with different amounts of blur (bottom row)

To further illustrate this parameter, let’s look at several spatial resolution parameters associated with an image of an edge.

The following table summarizes the spatial resolution parameter values discussed above.

Image Simulations

Simulations were performed to illustrate the effect of varying spatial resolution on different features. In all cases, a data set with a smaller GSD (higher resolution) was used as input to generate larger GSD (lower resolution) image products. The coarser GSD values are sufficiently large that the finite resolution of the input image has little to no effect on the output image. Different amounts of blur were added to simulate imagery with different Q-effective values. The blurred image was then resampled to produce imagery with varying GSDs.

Image simulation steps 
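A simplified version of these steps might look like the following Python sketch; the Gaussian PSF and the block-average resampling are illustrative stand-ins for the kernels actually used in the simulations, and the small additional blur contributed by the resampling itself is ignored:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate(image, scale, q_eff):
        # Target FWHM = q_eff * output GSD = q_eff * scale input pixels;
        # for a Gaussian PSF, FWHM = 2.355 * sigma.
        sigma = q_eff * scale / 2.355
        blurred = gaussian_filter(image, sigma=sigma)
        # Resample by averaging scale x scale blocks of pixels.
        h = (blurred.shape[0] // scale) * scale
        w = (blurred.shape[1] // scale) * scale
        blocks = blurred[:h, :w].reshape(h // scale, scale, w // scale, scale)
        return blocks.mean(axis=(1, 3))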

Several “classes” of data, defined by their GSD, were simulated to cover the range of data types a remote sensing consumer might encounter. The data classes simulated were Moderate Resolution Satellite, High Resolution Satellite, and High Resolution Aerial. For each class of data simulated, four types of spatial performance, ranging from blurry through balanced to aliased, were produced. This table outlines the data simulations performed and provides information about the input data sources.

Multispectral image simulation outputs for each sensor class and spatial performance described above are provided. Image chips containing the listed features of interest are shown in the table below. Each image chip can be selected (double-click) to open a second window with a larger, downloadable version of the image. Panchromatic image chips are provided in a following table.

Multispectral Image Simulation Examples

Panchromatic Image Simulation Examples

High Resolution Example 2