The primate retina performs nonlinear image data reduction while
providing a compromise between high resolution where
needed, a wide field-of-view, and small output image
size. For autonomous robotics, this compromise is
useful for developing vision systems with adequate
response times. This paper reviews the two classes
of models of retino-cortical data reduction used in
hardware implementations. The first class reproduces
the retina-to-cortex mapping using conformal mapping functions. Pixel intensities are averaged uniformly within nonoverlapping groups of pixels called receptive fields (RF's). As in the retina, the size of
the RF's increases with distance from the center of
the sensor. Implementations using this class of models are reported to run at video rates (30 frames per second).
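To make the first class concrete, the sketch below implements nonoverlapping, uniform RF averaging on a log-polar grid, the conformal mapping most commonly used for the retino-cortical transform. The function name, the ring/wedge counts, and the grid geometry are illustrative assumptions, not details taken from any reviewed implementation.

```python
import numpy as np

def log_polar_reduce(img, n_rings=32, n_wedges=64, r_min=2.0):
    """Uniformly average a square grayscale image into nonoverlapping
    log-polar receptive fields; returns an (n_rings, n_wedges) array."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - cy, x - cx)
    theta = np.arctan2(y - cy, x - cx)          # angle in [-pi, pi]
    r_max = min(cy, cx)
    valid = (r >= r_min) & (r <= r_max)         # fovea/border pixels dropped
    # Ring index grows with log(eccentricity), so RF area increases
    # with distance from the sensor center, as in the retina.
    ring = np.floor(n_rings * np.log(r[valid] / r_min)
                    / np.log(r_max / r_min)).astype(int).clip(0, n_rings - 1)
    wedge = np.floor((theta[valid] + np.pi) / (2 * np.pi)
                     * n_wedges).astype(int).clip(0, n_wedges - 1)
    sums = np.zeros((n_rings, n_wedges))
    counts = np.zeros((n_rings, n_wedges))
    np.add.at(sums, (ring, wedge), img[valid])  # each pixel in exactly one RF
    np.add.at(counts, (ring, wedge), 1)
    return sums / np.maximum(counts, 1)         # uniform (unweighted) average
```

A full frame thus reduces to just n_rings x n_wedges values, the kind of small output size that makes fast response times attainable.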
The second class of models reproduces, in addition to the variable-resolution retino-cortical mapping, the overlap of the receptive fields of retinal ganglion cells.
Achieving data reduction with this class of models is
more computationally expensive due to the RF
overlap. However, an implementation using such a model and running at a minimum of 10 frames per second has recently been proposed. In addition to
biological consistency, models with overlapping
fields permit the simple selection of a variety of
RF computational masks.
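As an illustration of the second class, the sketch below computes each RF output as a weighted sum under a mask; the Gaussian mask, function names, and parameters are assumptions chosen for clarity, not the reviewed implementation. Because neighboring masks share pixels, each pixel contributes to several outputs, which is the source of the extra computational cost.

```python
import numpy as np

def gaussian_mask(shape, center, sigma):
    """Normalized Gaussian RF mask; any other computational mask
    (e.g., a difference-of-Gaussians) could be substituted here."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((y - center[0]) ** 2 + (x - center[1]) ** 2)
               / (2.0 * sigma ** 2))
    return g / g.sum()

def overlapping_rf_outputs(img, centers, sigmas):
    """One weighted sum per RF. Masks overlap, so a pixel may
    contribute to several outputs -- hence the higher cost."""
    return np.array([(gaussian_mask(img.shape, c, s) * img).sum()
                     for c, s in zip(centers, sigmas)])
```

Replacing gaussian_mask with a different kernel changes the RF computation without altering the sampling layout, which is the mask-selection flexibility noted above.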