
SAR Benchmark

The HPCS Scalable Synthetic Compact Application #3 (SSCA #3) simulates a sensor processing chain (Figure 1). It consists of a front-end sensor processing stage, where Synthetic Aperture Radar (SAR) images are formed, and a back-end knowledge formation stage, where detection is performed on the difference of the SAR images. The benchmark generates its own scalable synthetic ‘raw’ data. The goal is to mimic the most taxing computation and I/O requirements found in many embedded systems, such as medical and space imaging or reconnaissance monitoring. Its principal performance goal is throughput: maximizing the rate at which answers are generated. The computational kernels must keep up with copious quantities of sensor data, and the I/O kernels must manage both streaming data storage and sequential file retrieval.

Figure 1. Block diagram of SAR system benchmark.
The Scalable Data Generator (SDG) creates and stores simulated ‘raw’ SAR complex returns. It also generates and stores templates of rotated and pixelated letters.
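
As an illustration of the template-generation half of the SDG, the Python sketch below rotates a hand-coded letter bitmap through a few angles and re-pixelates each result. The letter shape, the angle set, and the use of scipy.ndimage.rotate are assumptions made for the example, not the benchmark's actual generator.

    import numpy as np
    from scipy.ndimage import rotate

    # Crude 7x7 bitmap of the letter 'T' (1 = letter pixel, 0 = background).
    # The real SDG's letters, sizes, and rotation set are not reproduced here.
    letter_T = np.array([
        [1, 1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0],
    ], dtype=float)

    def make_templates(letter, angles_deg):
        """Return a dict mapping rotation angle -> pixelated binary template."""
        templates = {}
        for angle in angles_deg:
            # Nearest-neighbour interpolation keeps the rotated template binary.
            rotated = rotate(letter, angle, reshape=True, order=0)
            templates[angle] = (rotated > 0.5).astype(float)
        return templates

    templates = make_templates(letter_T, angles_deg=[0, 45, 90, 135])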

The Sensor Processing Stage loops until the specified number of images has been reached. In this stage, after reading the ‘raw’ SAR data, Kernel 1 forms a SAR image using a matched filtering and interpolation method [1]. 2D Fourier matched filtering and interpolation involve matched filtering the 2D Fourier transform of the returns against the transmitted SAR waveform. The results are then re-sampled, or interpolated, from a polar coordinate representation to a rectangular coordinate representation. A final inverse Fourier transform converts the results into the spatial domain, where the SAR image becomes visibly discernible. After Kernel 1, the pixelated templates are inserted at random locations in the SAR image. Kernel 2 then stores each ‘populated’ image, in a streaming I/O fashion, at a random location in a grid of stored images.
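
The Python sketch below walks through the Kernel 1 chain as described: a 2D Fourier transform of the raw returns, matched filtering by a conjugate multiply against the 2D FFT of the transmitted waveform, polar-to-rectangular re-sampling of the filtered spectrum, and an inverse 2D FFT back to the spatial domain. The grid construction, the use of scipy.interpolate.griddata, and all function and variable names are assumptions for illustration; the benchmark's actual formulation follows Soumekh [1].

    import numpy as np
    from scipy.interpolate import griddata

    def form_sar_image(raw_returns, tx_waveform_fft, k_radial, k_angle):
        """raw_returns: 2D complex returns, shape (len(k_radial), len(k_angle)),
        sampled on a polar spatial-frequency grid; tx_waveform_fft: 2D FFT of
        the transmitted waveform on the same grid."""
        # 2D Fourier transform of the raw returns.
        returns_fft = np.fft.fft2(raw_returns)

        # Matched filter: multiply by the conjugate of the transmitted waveform.
        filtered = returns_fft * np.conj(tx_waveform_fft)

        # Re-sample the filtered spectrum from polar to rectangular coordinates.
        kr, ka = np.meshgrid(k_radial, k_angle, indexing="ij")
        kx, ky = kr * np.cos(ka), kr * np.sin(ka)
        gx, gy = np.meshgrid(
            np.linspace(kx.min(), kx.max(), filtered.shape[0]),
            np.linspace(ky.min(), ky.max(), filtered.shape[1]),
            indexing="ij")
        points = (kx.ravel(), ky.ravel())
        resampled = (
            griddata(points, filtered.real.ravel(), (gx, gy),
                     method="linear", fill_value=0.0)
            + 1j * griddata(points, filtered.imag.ravel(), (gx, gy),
                            method="linear", fill_value=0.0))

        # Inverse 2D FFT converts the result into the spatial domain.
        return np.abs(np.fft.ifft2(resampled))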

The Knowledge Formation Stage loops until the specified number of image sequences has been reached. Kernel 3 randomly selects an image sequence and reads it through its entire grid depth. Kernel 4 computes the difference between each pair of consecutive images and thresholds the difference image to produce a set of changed pixels. A sub-image is formed around each group of changed pixels and convolved with all the letter templates. The template that produces the strongest match is selected as the identity of that sub-image.
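
A minimal Python sketch of the Kernel 4 steps just described, using the peak of a 2D correlation (convolution with a flipped template) as the match strength; the threshold value, the connected-component grouping via scipy.ndimage.label, and the shape of the templates dictionary are assumptions for the example.

    import numpy as np
    from scipy.ndimage import label, find_objects
    from scipy.signal import correlate2d

    def detect_and_classify(image_a, image_b, templates, threshold):
        """templates: dict mapping a (letter, rotation) identity -> 2D template."""
        diff = np.abs(image_b - image_a)
        changed = diff > threshold                 # set of changed pixels
        groups, _ = label(changed)                 # group neighbouring changed pixels
        detections = []
        for region in find_objects(groups):
            sub = diff[region]                     # sub-image around one group
            best_id, best_score = None, -np.inf
            for identity, template in templates.items():
                # Peak correlation as the measure of template match strength.
                score = correlate2d(sub, template, mode="full").max()
                if score > best_score:
                    best_id, best_score = identity, score
            detections.append((region, best_id, best_score))
        return detections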

The benchmark is verified by comparing each identified letter/rotation with what was actually inserted at that location. The input data is constructed so that all the pixelated letters should be found, with no false alarms.
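
An illustrative form of this check, assuming the SDG records each insertion as a (row, column, letter, rotation) tuple and Kernel 4 reports detections the same way; the position tolerance is an assumption.

    def verify(inserted, detected, tol=2):
        """True if every inserted letter/rotation is detected near its true
        location and no detection lacks a matching insertion (no false alarms)."""
        def match(a, b):
            return (abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol
                    and a[2:] == b[2:])
        missed = [t for t in inserted if not any(match(t, d) for d in detected)]
        false_alarms = [d for d in detected if not any(match(t, d) for t in inserted)]
        return not missed and not false_alarms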

The SAR system benchmark can be operated in one of three modes: System Mode (which includes both its computational kernels and I/O kernels), Compute Mode (which includes its computational kernels while bypassing its I/O kernels), and File I/O Mode (which includes its I/O kernels while bypassing its computational kernels). Each kernel’s operation is timed.
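
A hypothetical harness sketch of the three modes and the per-kernel timing; the mode names, the kernel callables, and the reporting format are placeholders, not the benchmark's actual interface.

    import time
    from enum import Enum

    class Mode(Enum):
        SYSTEM = "system"      # computational kernels and I/O kernels
        COMPUTE = "compute"    # computational kernels only; I/O bypassed
        FILE_IO = "file_io"    # I/O kernels only; computation bypassed

    def timed(name, kernel, *args, **kwargs):
        """Run one kernel and report its wall-clock time."""
        start = time.perf_counter()
        result = kernel(*args, **kwargs)
        print(f"{name}: {time.perf_counter() - start:.3f} s")
        return result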

The Compute Mode corresponds to the traditional performance focus of the HPEC community. The System Mode can be used to measure both compute and storage I/O throughput, which is becoming increasingly important in HPEC systems. All the kernels and the I/O are designed to be run on parallel computing and parallel storage systems.


Reference

[1] SAR code fragments donated by Mehrdad Soumekh (1999); obtained as a free download through The MathWorks website.