Robotics and Autonomous Systems
Volume 32, Issue 4, 30 September 2000, Pages 207-218
PII: S0921-8890(99)00127-X
Copyright © 2000 Elsevier Science B.V. All rights reserved.
Centering behavior with a mobile robot using monocular foveated vision
Marc Ebner and Andreas Zell
Eberhard-Karls-Universität Tübingen, Wilhelm-Schickard-Institut für Informatik, Arbeitsbereich Rechnerarchitektur, Köstlinstraße 6, 72074 Tübingen, Germany
Received 3 December 1997; revised 25 October 1999. Communicated by F.C.A. Groen. Available online 16 August 2000.
A system for corridor following based on properties of the human visual system is presented. The robot extracts image features using an interest operator to compute sparse optical flow induced by the translatory motion of the robot. The available status information from the robot is used to compensate for the known rotatory movement of the image. Control of the robot is done by transforming the optical flow to ego-motion complex log space. The difference between the median flow extracted from the left and right peripheral visual areas is used to control the heading of the robot. Forward velocity is controlled by trying to keep the perceived optical flow constant.
Author Keywords: Visual robot control; Centering behavior;
Complex log mapping
Control of a mobile robot may be achieved by constructing a complete three-dimensional model of the world, planning a path and then executing the required steps to move the robot along the path. Of course it is desirable to know as much as possible about the environment. However, this strategy requires much computing power which may be used for other, higher level tasks if a simple control strategy can be employed at the same time. Horswill [17] has pointed out that it is unnecessary to construct a three-dimensional model of the environment simply to reduce it to a single number such as heading inside a corridor.
Aloimonos [1] introduced purposive vision which only uses the data required to solve the particular task, thereby saving valuable computer resources. This makes it possible to close the loop and achieve vision during action [31]. Bekey [2] offered constructive proofs that biologically inspired robot control can succeed where traditional approaches are difficult or impossible. Like Sandini et al. [31] we are working in the field of visual robot control with an emphasis on vision during action. In this context much may be learnt by looking at how insect vision works [15 and 16]. Some of this knowledge might be transferable to the field of robot vision [12].
In particular, researchers have been inspired by insects like bees for robot control [6, 31, 32, 33 and 34]. Bees are unable to measure distances larger than a few centimeters using stereo vision because of the small distance between their eyes. However, bees are able to infer the range of an object from the perceived angular velocity provided that the ego-motion of the eye and the bearing of the object are known [36]. Bees are performing centering behavior in a corridor by balancing the speeds of the retinal images in the two eyes [36 and 37]. The higher the speed of the retinal image, the smaller the distance to the wall. Thus the bee is located exactly in the center of a corridor if the image speeds of the left and right eye are equal. Bees seem to be able to perform this centering behavior independent of the contrast or spatial-frequency content of gratings along the walls.
However, humans have a more advanced visual system which differs from the visual system of the bee. An introduction to the human visual system is given by Tovée [41]. In the human visual system the axis of fixation is facing forward with a rather small field of view [4]. The spatial resolution is much higher in the center (fovea) than in the periphery of the retina. Humans perceive the world as stationary even though the eye may be moving. Information about the desired eye movements is used to compensate for the motion in the image induced by the ego-motion of the eye [42]. If the eye is moved passively, e.g. with a finger, or if one tries to change the gaze with a mechanically fixated eye, the world seems to move. Ego-motion of the head is determined by the vestibular apparatus [4]. Information from the vestibular apparatus could be used to improve the perceived image. In this respect our work has been inspired more by the human visual system than by the visual system of the bee.
The paper is organized as follows. Section 2 reviews relevant research in the field of visual robot control in general and corridor following in particular. Section 3 reviews the complex logarithmic mapping whose properties are important for our work. Section 4 describes our control algorithm in detail. The computation of the ego-motion of the camera from the available status information supplied by our mobile robot is also described. Section 5 describes some of the experiments performed with our mobile robot. The paper ends with a conclusion in Section 6.
Much research has been done in the field of visual obstacle avoidance and navigation. In this section we briefly review some of these approaches. Research related to the task of corridor following using optical flow is discussed in more detail below.
Moravec [25] used a mobile robot with a sliding camera inspired by the motion of lizards to determine a three-dimensional map of interesting points in an environment. A path is planned through the environment and subsequently updated. The robot moves in discrete steps of 1 m.
Brady and Wang [3] determine scene structure in the environment of a mobile robot by extracting corner points. For each point depth is computed using a stereo camera or a structure from motion approach. Brady and Wang suggest that it might not be necessary to explicitly compute depth. Useful information for robot control may be derived directly from optical flow, disparity or rate of change of disparity. Indeed, Tomasi and Kanade [40] have shown that it is possible to extract shape and motion from image sequences without calculating camera-centered depth.
Fossa et al. [11] developed a vision based navigation system using three cameras. Two cameras are used for obstacle detection and avoidance and the third camera is used for self-localization.
Franceschini et al. [12] developed an artificial compound eye with facets which was inspired by the fly's eye. Although control is done continuously the movement of the robot is separated into a purely translational step followed by a rotational step. Franceschini et al. use this type of motion to determine the translatory component of the optical flow.
Horswill [17] developed a simple, cheap and robust visual navigation system. Both the rotational and the translational velocity are under computer control. Corridor following is done by estimating the heading of the robot and calculating a measure for the distance to the left and right walls from visual data. Control of the robot is done in a closed loop except for turning at corners, which is performed open-loop. Horswill emphasizes that only the data required for the task at hand has to be extracted from the visual input, e.g. it is not necessary to construct a complete 3D model of the environment just to achieve a corridor following behavior.
Crespi et al. [8] followed a memory-based approach to navigation inside a corridor. First, images labeled with known lateral displacement and heading of the robot inside the corridor are used during a learning phase. Lateral displacement is discretized into three values and heading into three or four values. During robot control a probability distribution is calculated for the classes which specifies the new translational and rotational velocity.
Sobey [38] developed a zig-zag motion strategy for a monocular robot to avoid collisions. First an orientation movement is performed to determine the distance to the objects 60° on both sides of the goal. All subsequent movements are made in a direction for which a safe distance to travel is known using a potential field method. A change of direction between 20° and 120° is made for every move which results in a zig-zag motion. Range values are computed only for a horizontal strip of the image. Sobey has used an action-perception-planning cycle which acquires 16 images for every translatory motion.
Vogelgesang et al. [43] developed an algorithm for depth perception from radial optical flow fields. They propose to use their algorithm with a mobile robot. However, if the heading of the robot changes, the algorithm has to be reset.
Jochem and Pomerleau [21] developed a very successful vision-based system for driving on outdoor roads which is also able to perform tactical maneuvers such as lane changing. The system is based on an artificial neural net trained by watching a human teacher.
Košecká [22] represented the environment as a place graph and used visual servoing to move from one place to the next. It is assumed that world coordinates of the landmarks used for navigation are known.
Coombs and Roberts [6] used two cameras to achieve a centering behavior inspired by the honeybee. The cameras are located at an angle of 60° from the front of the robot. Coombs and Roberts determine gradient-parallel optical flow at full spatial resolution for each of the cameras. The maximum optical flow is determined by a histogram method. Both maxima are compared to calculate the desired heading of the robot. Coombs and Roberts note that this approach works reliably as long as the focus of expansion does not enter the image. The desired heading is saturated to prevent this from happening. Due to the architecture of the robot, the gaze of the camera is stabilized as the robot rotates. An alternative strategy proposed by Coombs and Roberts would be to subtract the optical flow, which is observed around the focus of expansion, from the peripheral optical flow.
Coombs et al. [5] use peripheral normal flow obtained from a camera with a wide lens to achieve centering behavior. Another camera with a narrow lens is used to estimate time-to-contact from flow divergence to detect imminent collisions. This information is used to turn the robot around in a "dead end". The cameras are rotationally stabilized by rotating them in the direction opposite to the rotation of the robot's body. A saccade is made if the camera orientation differs too much from the heading of the robot to align them again.
Santos-Victor et al. [33 and 34] equipped a mobile robot with two laterally pointing cameras to achieve a centering reflex also motivated by the behavior of honeybees. This basic centering reflex was extended with a sustaining mechanism which allows the robot to navigate whenever optical flow is only available from one of the cameras. Due to their camera setup Santos-Victor et al. assume that only horizontal optical flow can occur which simplifies calculation of optical flow. Average optical flow from the left and right camera is used to control the robot. Santos-Victor et al. derived constraints that have to be met to control the robot during rotations. By choosing the minimum translational velocity, the maximum rotational velocity and the setup of the cameras appropriately these constraints may be met. Santos-Victor et al. also note the possibility of suppressing robot control during times when the constraints are not met. Both possibilities worked successfully with their robot. They control the rotational velocity of the robot with a PID controller. Translational velocity is controlled by keeping the desired optical flow at a rate of 2pixels/frame.
Neven and Schöner [27] developed a behavior based vision-guided approach to homing and obstacle avoidance. Time to contact is estimated from optical flow which is used to control heading direction and velocity. In the absence of reliable visual information the robot uses information from dead reckoning. Neven and Schöner compute optical flow and time to contact from coarsely sampled images with two cameras (left and right). Neven and Schöner separate translatory and rotatory motions of the robot. Both forward and angular velocity are controlled by the estimated time to contact information. Time to contact estimates close to the center of the image are excluded because the peripheral information is more reliable. Small rotatory motions that may occur even if the robot is supposed to perform only a translatory motion along the optical axis are removed by subtracting the average flow along the horizontal and vertical meridian.
The approach presented here works with a single camera with the focus of expansion in view. Data is gathered in a purposive way. Control is done continuously in a closed loop. Our approach differs from the previous ones by Coombs and Roberts, Coombs et al., Santos-Victor et al., and Neven and Schöner. Information about the ego-motion of the camera obtained from the robot's status information is used to compensate for the rotatory motion of the camera. By using sensed velocities instead of the desired velocities a separate algorithm may be used to control the gaze of the camera. Obviously a centering behavior cannot be achieved with one camera pointing sideways. Nevertheless it should be possible to achieve centering behavior as long as both walls are visible and sufficient visual information is present in the visual areas which are used to control the robot.
Since we are working with a camera with the focus of expansion in view we use the complex logarithmic mapping to transform the optical flow calculated in image coordinates to complex log space. This transformation simplifies the comparison of radial optical flow. The complex logarithmic mapping has already been extensively studied by several researchers and has been used in a number of different areas such as character recognition [28], template matching [44], extraction of moving objects [13 and 18], motion stereo [19] and object detection and centering [29]. Complex logarithmic mapping is pseudo-invariant to size, rotation, and projection scaling [35]. For a fixated object, changes of size or rotation lead to a linear shift of an invariant image. The complex logarithmic mapping performs a conformal mapping of the log polar plane to a Cartesian plane [39]. The ego-motion complex logarithmic mapping is defined as [20]:
$$ r = \ln\sqrt{(x - x_{FOE})^2 + (y - y_{FOE})^2}, \qquad (1) $$

$$ \varphi = \arctan\frac{y - y_{FOE}}{x - x_{FOE}}. \qquad (2) $$
An image transformed with this mapping is shown in Fig. 1. The ego-motion complex logarithmic mapping differs from the complex logarithmic mapping in that it is not taken about the origin. Instead it is taken about the focus of expansion $(x_{FOE}, y_{FOE})$. For a forward moving observer, stationary objects in the field of view only produce horizontal flow in ego-motion complex log space [18]. Jain et al. [19] have shown that (assuming a focal length $f=1$):
$$ \Delta r = \ln Z - \ln(Z - \Delta Z) = \ln\frac{Z}{Z - \Delta Z}, \qquad (3) $$

where $\Delta Z$ is the distance travelled along the optical axis between the two images.
Fig. 1. Image, image in complex log space and its inverse.
Thus the optical flow in ego-motion complex log space is a direct measure for the distance Z to the objects in view for a known translatory motion.
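To make the mapping concrete, the following Python sketch implements the mapping of Eqs. (1) and (2) and the depth relation of Eq. (3) as reconstructed above. The function names and the numpy implementation are ours and are not part of the original system.

```python
import numpy as np

def ecl_map(points, foe):
    """Map image points into ego-motion complex log space (Eqs. (1)-(2)).

    points : (N, 2) array of image coordinates (x, y)
    foe    : (x_FOE, y_FOE), focus of expansion in image coordinates
    Returns an (N, 2) array of (r, phi) coordinates.
    """
    d = np.asarray(points, dtype=float) - np.asarray(foe, dtype=float)
    r = np.log(np.hypot(d[:, 0], d[:, 1]))      # radial coordinate
    phi = np.arctan2(d[:, 1], d[:, 0])          # angular coordinate
    return np.stack([r, phi], axis=1)

# For a purely translatory camera motion along the optical axis the radial
# shift in this space depends only on depth (Eq. (3) with f = 1):
#   delta_r = log(Z) - log(Z - delta_Z)
# so a known forward motion delta_Z turns the measured flow into depth Z.
def depth_from_radial_flow(delta_r, delta_z):
    """Recover depth Z from the radial flow delta_r for a forward motion delta_z."""
    return delta_z / (1.0 - np.exp(-delta_r))
```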
Tistarelli and Sandini [39] also analyzed the polar and log polar mapping and determined several advantages that may be gained by using a polar or log polar mapping. In addition to the already mentioned invariance property to scaling and rotations the complex log mapping also performs a data reduction due to the non-uniform sampling of the image. Motion equations which relate ego-motion to the optical flow are simplified and time to impact may be easily calculated using a complex log mapping.
Schwartz [35] has shown that the retino-striate mappings of the rhesus, squirrel and owl monkey may be described by a complex logarithmic mapping. With the direction of the optical axis facing forward humans could be using the complex logarithmic mapping to achieve obstacle avoidance or centering behavior. We are not suggesting that human centering behavior is actually achieved the way we constructed our algorithm. We are investigating if it is indeed possible to achieve centering behavior with monocular vision and the focus of expansion directly in view for robot control using the complex logarithmic mapping.
The control algorithm is divided into three parts. First relevant information is extracted from the images by locating interesting points with an interest operator. These points are then used to compute a sparse optical flow field which is transformed into complex log space. An image with a sparse optical flow field in image coordinates is shown in Fig. 2. Finally, the difference between the left and right optical flows is used to control the robot. The input to the algorithm consists of an image I(t2) taken at time t2. The algorithm works on two images at any given time. The data from the previous iteration of the control algorithm, image I(t1), is stored internally.
Fig. 2. Image, image with sparse optical flow, previous image compensated for rotatory camera movement, difference picture between image and previous image compensated for rotatory camera movement.
Interesting points, which simplify the computation of optical flow, are located in the input images. Let the interesting points of image I(t2) be F(t2). Because of the aperture problem [20], dense optical flow describing the actual motion is difficult to compute. Most methods compute optical flow in an iterative way (e.g. Horn and Schunck [14]). Due to temporal Gaussian smoothing, several images are often needed [34], which introduces an additional delay because optical flow is only computed for the center image. Santos-Victor et al. [34] therefore separate image acquisition and evaluation by grabbing a set of five images at video rate and then analyzing the data. In contrast to the work of Santos-Victor et al. we only grab one image for each iteration of the control algorithm.
To detect interesting points in the image, we are using the Moravec interest operator [24 and 25]. It is a simple operator which computes the local maxima of the minimum variance of local image intensities in the horizontal, vertical and both diagonals. Computation of optical flow for a set of interesting points is easier than the computation of full optical flow due to the aperture problem [20]. The type of features relevant to control the robot obviously depends on the environment of the robot. Using genetic programming [23] (a class of evolutionary algorithms which can be used to evolve variable sized, hierarchical individuals) we are evolving interest operators which are optimal according to some measure. This would make the first stage of the algorithm, extraction of interesting points for the calculation of optical flow, adaptive. We previously evolved edge and interest operators [9 and 10] by directly incorporating the output of the desired operator into the fitness calculation and are now working towards adaptive feature extraction where only the desired qualities of the operators are specified.
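The Moravec operator itself is simple enough to state as a short sketch. The window size and the threshold below are illustrative assumptions; the operator computes, for every pixel, the minimum directional sum of squared intensity differences and keeps local maxima of this measure as interesting points.

```python
import numpy as np

def moravec_interest(image, window=3, threshold=100.0):
    """Moravec interest operator (sketch): minimum directional intensity variation.

    For every pixel the squared difference to the pixel shifted by one position
    is computed for the horizontal, vertical and the two diagonal directions and
    summed over a small window.  The interest value is the minimum over the four
    directions; interesting points are local maxima above a threshold (the
    threshold depends on the intensity range of the image).
    """
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    half = window // 2
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]   # horizontal, vertical, diagonals

    interest = np.full((h, w), np.inf)
    for dy, dx in shifts:
        diff = np.zeros_like(img)
        diff[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)] = (
            img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            - img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]) ** 2
        # Sum the squared differences over the local window.
        ssd = np.zeros_like(img)
        for oy in range(-half, half + 1):
            for ox in range(-half, half + 1):
                ssd += np.roll(np.roll(diff, oy, axis=0), ox, axis=1)
        interest = np.minimum(interest, ssd)

    # Keep local maxima above the threshold as interesting points.
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = interest[y, x]
            if v > threshold and v == interest[y - 1:y + 2, x - 1:x + 2].max():
                points.append((x, y))
    return points
```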
The ego-motion complex log mapping is only defined for a translatory movement of the observer. However, to control an arbitrarily moving mobile robot we compensate for any rotatory movements of the camera. A difference image between the current image and the previous image, compensated for the rotatory movements of the camera, is shown in Fig. 2. This image shows how accurately the compensation of the rotatory ego-motion is performed. We compensate for the rotatory movement of the camera by transforming the image to subtract this component from the optical flow. This leaves only the translatory component of the optical flow, which is the component relevant to the centering behavior. We use the robot's status information (translational and rotational velocity, pan and tilt angles) to predict the image at time t2 from the image I(t1) given the rotatory motion of the camera. It is assumed that the robot moves exactly as derived from the status information. Indeed, the human visual system makes a similar assumption, as described above.
In the following text we use the notation of Craig [7] to describe coordinate transformations. Let ${}^{C(t_1)}P = [X(t_1), Y(t_1), Z(t_1), 1]^T$ be a point in the camera frame $\{C(t_1)\}$ at time $t_1$. Then the coordinates of this point are ${}^{C(t_2)}P = [X(t_2), Y(t_2), Z(t_2), 1]^T$ in the camera frame $\{C(t_2)\}$ at time $t_2$. Let ${}^{C(t_2)}_{C(t_1)}T$ be the homogeneous transformation from the camera frame $\{C(t_1)\}$ to the camera frame $\{C(t_2)\}$ (Fig. 3). Thus we have

$$ {}^{C(t_2)}P = {}^{C(t_2)}_{C(t_1)}T \cdot {}^{C(t_1)}P, \qquad (4) $$
$$ \begin{bmatrix} X(t_2) \\ Y(t_2) \\ Z(t_2) \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_X \\ r_{21} & r_{22} & r_{23} & t_Y \\ r_{31} & r_{32} & r_{33} & t_Z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X(t_1) \\ Y(t_1) \\ Z(t_1) \\ 1 \end{bmatrix}, \qquad (5) $$

where the $r_{ij}$ denote the rotational part of the transformation ${}^{C(t_2)}_{C(t_1)}T$ and $(t_X, t_Y, t_Z)$ its translational part.
Fig. 3. Movement of the camera.
Using perspective projection x(t)=f(X(t)/Z(t)) and y(t)=f(Y(t)/Z(t)) with focal length f gives the following transformation for every image point for a rotatory camera motion [26]:
$$ x(t_2) = f\,\frac{r_{11}\,x(t_1) + r_{12}\,y(t_1) + r_{13}\,f}{r_{31}\,x(t_1) + r_{32}\,y(t_1) + r_{33}\,f}, \qquad (6) $$

$$ y(t_2) = f\,\frac{r_{21}\,x(t_1) + r_{22}\,y(t_1) + r_{23}\,f}{r_{31}\,x(t_1) + r_{32}\,y(t_1) + r_{33}\,f}. \qquad (7) $$
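A minimal sketch of this prediction step is given below. The paper warps the previous image as a whole; for brevity the sketch only transforms point coordinates with the relation of Eqs. (6) and (7), which is what the subsequent matching step needs. The function and parameter names are ours.

```python
import numpy as np

def rotate_image_points(points, R, f):
    """Predicted image positions under a pure camera rotation (Eqs. (6)-(7)).

    points : (N, 2) array of image coordinates (x, y) at time t1
    R      : 3x3 rotation matrix taking camera frame {C(t1)} to {C(t2)}
    f      : focal length in pixels
    Returns the (N, 2) array of predicted coordinates at time t2.
    """
    pts = np.asarray(points, dtype=float)
    # Homogeneous image rays (x, y, f) rotated into the new camera frame.
    rays = np.column_stack([pts, np.full(len(pts), float(f))])
    rotated = rays @ np.asarray(R, dtype=float).T
    # Perspective division back onto the image plane.
    return f * rotated[:, :2] / rotated[:, 2:3]
```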
Sparse optical flow is computed by matching image intensities in a local area around the interesting points. We match the interesting points F(t2) from the new image at time t2 with the interesting points from the predicted image, i.e. the previous image compensated for the camera rotation. The search is constrained to those points that radiate outward. The quality of the match (squared differences in a local area around the points) is scaled with the distance of the match. Thus preference is given to a close match. After the interesting points in the predicted image and the actual image I(t2) have been matched, the sparse optical flow has been determined in the original image.
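The matching step could be sketched as follows. The patch size and the maximum search distance are illustrative assumptions; the cost is the sum of squared patch differences scaled by the displacement, and only candidates lying further from the focus of expansion are considered, as described above.

```python
import numpy as np

def match_radially(img_pred, pts_pred, img_new, pts_new, foe, patch=5, max_dist=15.0):
    """Sparse optical flow by matching interest points radially outward (sketch).

    For every interest point of the predicted (rotation-compensated) image the
    best match among the interest points of the new image is searched, restricted
    to candidates that lie further away from the focus of expansion.  Returns a
    list of (p_from, p_to) pairs.
    """
    half = patch // 2
    foe = np.asarray(foe, dtype=float)

    def cutout(img, p):
        x, y = int(round(p[0])), int(round(p[1]))
        return img[y - half:y + half + 1, x - half:x + half + 1].astype(float)

    flow = []
    for p in pts_pred:
        p = np.asarray(p, dtype=float)
        r_p = np.linalg.norm(p - foe)
        best, best_cost = None, np.inf
        for q in pts_new:
            q = np.asarray(q, dtype=float)
            d = np.linalg.norm(q - p)
            if d > max_dist or np.linalg.norm(q - foe) < r_p:
                continue                       # only consider outward motion
            a, b = cutout(img_pred, p), cutout(img_new, q)
            if a.shape != b.shape:             # patch leaves the image
                continue
            cost = np.sum((a - b) ** 2) * d    # scale cost with displacement
            if cost < best_cost:
                best, best_cost = q, cost
        if best is not None:
            flow.append((p, best))
    return flow
```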
The optical flow radiates outward from the focus of expansion for a forward motion of the camera. Provided that peripheral optical flow is available from the left and right sides, these flow vectors may be used to achieve a centering behavior with the focus of expansion in view. The focus of expansion is calculated from the status information of the robot:
$$ x_{FOE} = f\,\frac{T_X}{T_Z}, \qquad y_{FOE} = f\,\frac{T_Y}{T_Z}, \qquad (8) $$

where $(T_X, T_Y, T_Z)$ is the translational component of the camera motion expressed in the camera frame.
The sparse optical flow is then transformed to complex log space according to
$$ r = \frac{r_{\max}}{\ln r_{src}}\,\ln\sqrt{(x - x_{FOE})^2 + (y - y_{FOE})^2}, \qquad (9) $$

$$ \varphi = \frac{\varphi_{\max}}{2\pi}\,\arctan\frac{y - y_{FOE}}{x - x_{FOE}}, \qquad (10) $$

where $r$ and $\varphi$ are the coordinates in ego-motion complex log space and $r_{src}$ is the maximum radius in coordinates of the source image. The number of pixels used in complex log space for the radial direction is specified by $r_{\max}$, and $\varphi_{\max}$ is the number of pixels used for the angular direction. Thus we have $r_{\min} \le r \le r_{\max}$ and $0 \le \varphi \le \varphi_{\max}$.
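A sketch of the quantized mapping of Eqs. (9) and (10), with the parameter values used in the experiments of Section 5 as defaults, could look as follows (the implementation is ours):

```python
import numpy as np

def to_complex_log_pixels(points, foe, r_src=64, r_max=32, phi_max=32):
    """Quantized ego-motion complex log coordinates of Eqs. (9)-(10) (sketch).

    points are image coordinates, foe the focus of expansion; r_src is the
    maximum radius in the source image, r_max and phi_max the number of pixels
    used for the radial and angular direction (defaults as in the experiments).
    """
    d = np.asarray(points, dtype=float) - np.asarray(foe, dtype=float)
    rho = np.hypot(d[:, 0], d[:, 1])
    r = r_max * np.log(rho) / np.log(r_src)
    phi = phi_max * (np.arctan2(d[:, 1], d[:, 0]) % (2.0 * np.pi)) / (2.0 * np.pi)
    return r, phi
```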
Optical flow is computed first for the acquired image and then transformed to complex log space because the untransformed images have a much larger resolution than the transformed ones. Of course optical flow could be computed directly for the already transformed images. However due to the reduced image size, the number of interesting points that can be extracted from the transformed image is reduced. We experimented with several different methods, e.g. extraction of features from the transformed image, calculation of full optical flow for the original images and the transformed images. The approach described here was found to be the one that worked best.
We only use the optical flow located in the areas $\varphi_{l,\min} \le \varphi \le \varphi_{l,\max}$, $\varphi_{r,\min} \le \varphi \le \varphi_{r,\max}$ and $r_{\min}' \le r \le r_{\max}$ as shown in Fig. 4. For the experiments described below we chose the values $\varphi_{l,\min}=45^\circ$, $\varphi_{l,\max}=135^\circ$, $\varphi_{r,\min}=225^\circ$ and $\varphi_{r,\max}=315^\circ$. This effectively excludes the optical flow from the ceiling and the floor, which cannot be used to control the robot. We also excluded the area directly around the focus of expansion, since the optical flow is usually small in this area and might be further disturbed by an incorrect compensation for the rotatory motion of the camera. Since exact data cannot be determined from the sensor information, we exclude this area because its information is too unreliable. For our experiments we have chosen $r_{\min}' = 0.75\,(r_{\max} - r_{\min}) + r_{\min}$. Let $f_l$ and $f_r$ be the median of the optical flow extracted from the left and right peripheral areas, respectively.
Fig. 4. Areas of complex logarithmic space used to control the robot.
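A sketch of the selection of the peripheral areas and of the computation of the median flows $f_l$ and $f_r$ is given below. Angles are taken in degrees for clarity; the function and parameter names are ours.

```python
import numpy as np

def peripheral_median_flows(r, phi_deg, flow, r_min, r_max,
                            left=(45.0, 135.0), right=(225.0, 315.0), frac=0.75):
    """Median radial flow in the left and right peripheral areas (sketch).

    r, phi_deg and flow are arrays of equal length: log-polar radius, angle in
    degrees and radial flow magnitude of each sparse flow vector.  Only vectors
    inside the two angular windows and beyond r'_min = frac*(r_max-r_min)+r_min
    are used, mirroring the regions of Fig. 4.  Returns (f_left, f_right),
    either of which is None if no flow vector falls into its area.
    """
    r = np.asarray(r, dtype=float)
    phi_deg = np.asarray(phi_deg, dtype=float) % 360.0
    flow = np.asarray(flow, dtype=float)

    r_prime = frac * (r_max - r_min) + r_min
    radial_ok = (r >= r_prime) & (r <= r_max)

    def median_in(window):
        lo, hi = window
        mask = radial_ok & (phi_deg >= lo) & (phi_deg <= hi)
        return float(np.median(flow[mask])) if np.any(mask) else None

    return median_in(left), median_in(right)
```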
The homogeneous transformation of the camera movement may be calculated directly from the status information of the robot. Let {C} be the camera coordinate system and {R} the coordinate system of the robot base. The coordinate frames for our camera setup are shown in Fig. 5. We split the camera transformation into three parts. The first describes the location of the camera relative to the robot's base at t2, the next describes the movement of the robot and the last describes the location of the camera relative to the robot at time t1. Then the homogeneous transform C(t2)C(t1)T which transforms coordinates in the camera frame {C(t1)} to coordinates in the camera frame {C(t2)} may be calculated as
$$ {}^{C(t_2)}_{C(t_1)}T = {}^{C(t_2)}_{R(t_2)}T \cdot {}^{R(t_2)}_{R(t_1)}T \cdot {}^{R(t_1)}_{C(t_1)}T, \qquad (11) $$

where ${}^{C(t_2)}_{R(t_2)}T$ is the transformation from the coordinate system of the robot $\{R(t_2)\}$ to the coordinate system of the camera $\{C(t_2)\}$ at time $t_2$, ${}^{R(t_2)}_{R(t_1)}T$ describes the robot movement and ${}^{R(t_1)}_{C(t_1)}T$ transforms the coordinate system of the camera $\{C(t_1)\}$ to the coordinate system of the robot $\{R(t_1)\}$ at time $t_1$. ${}^{R(t_2)}_{R(t_1)}T$ is calculated from the status information of the robot as
$$ {}^{R(t_2)}_{R(t_1)}T = R_Z(-\omega\,\Delta t)\cdot D_X(-v\,\Delta t), \qquad (12) $$

where $D_X$ describes the translatory motion of the robot along the X-axis with translational velocity $v$, $R_Z$ the rotatory motion of the robot about the Z-axis with rotational velocity $\omega$, and $\Delta t = t_2 - t_1$ is the time between the two images. The matrices relating the camera coordinate system to the base coordinate system are computed using standard manipulator kinematics.
Fig. 5. Colin, a Real World Interface B21 robot and the camera model with attached frames.
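The composition of Eqs. (11) and (12) can be sketched with homogeneous 4×4 matrices as follows. The sign convention follows our reading of Eq. (12), and the mounting transforms between camera and robot base would come from the pan-tilt kinematics; the function names are ours.

```python
import numpy as np

def robot_motion_transform(v, omega, dt):
    """Homogeneous transform {R(t1)} -> {R(t2)} from the robot status (Eq. (12), sketch).

    The robot is assumed to translate with velocity v along its X-axis and to
    rotate with velocity omega about its Z-axis during the interval dt.
    """
    d, theta = v * dt, omega * dt
    # Translation along X by -d and rotation about Z by -theta express the
    # old base frame in the new one.
    D_X = np.eye(4)
    D_X[0, 3] = -d
    c, s = np.cos(-theta), np.sin(-theta)
    R_Z = np.array([[c, -s, 0.0, 0.0],
                    [s,  c, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
    return R_Z @ D_X

def camera_motion_transform(T_cam_from_base, T_base_motion, T_base_from_cam):
    """Compose the full camera motion of Eq. (11):
    T_{C(t2)<-C(t1)} = T_{C(t2)<-R(t2)} . T_{R(t2)<-R(t1)} . T_{R(t1)<-C(t1)}."""
    return T_cam_from_base @ T_base_motion @ T_base_from_cam
```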
A PID controller [7] was implemented to control the rotational velocity of the robot. The gains were determined experimentally and were set to $k_p = 5.0$, $k_i = 0.1$ and $k_d = 0.005$ during the experiments described here. The control law is given by

$$ \omega_d(t) = k_p\,e(t) + k_i \int_0^t e(\tau)\,d\tau + k_d\,\frac{de(t)}{dt}, \qquad (13) $$

where $e(t) = f_l - f_r$ is the difference between the extracted flow in the left and right peripheral visual areas in complex log space and $\omega_d$ specifies the desired rotational velocity of the robot. In case no interesting points can be determined on one side, the robot moves towards the other side with a constant velocity. This behavior is inspired by the work of Sobey [38]: it is safer to move in a direction for which the distance to the nearest obstacle is known. If the robot turns towards the other wall it will be repelled by it as soon as some of the interesting points of that wall enter the peripheral area.
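A discrete-time sketch of this controller, with the gains given above, might look as follows (the class is ours, not code from the paper):

```python
class PID:
    """Discrete PID controller for the rotational velocity (sketch).

    e(t) = f_l - f_r is the difference of the median peripheral flows; the
    output is the desired rotational velocity omega_d.
    """
    def __init__(self, kp=5.0, ki=0.1, kd=0.005):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: omega_d = PID().step(f_left - f_right, dt)
```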
The translational velocity vd of the robot is set according to the control law
$$ v_d = \min\!\left(v_{\max},\; \max\!\left(v_{\min},\; v_{\max}\,\frac{f_d}{f_a}\right)\right), \qquad (14) $$
where vmin and vmax are the minimum and maximum translational velocities, fd the desired radial optical flow in the original image and fa is computed from the maximum of fl and fr which is transformed back into the original image. The control law could also have been formulated by transforming the desired optical flow to complex log space. This control law tries to keep the optical flow constant at about fd by slowing the robot down as it approaches a wall. Speed of the robot is increased again as the actual optical flow decreases. Santos-Victor et al. [34] also used velocity control to keep the measured optical flow constant using a sigmoid function for the velocity-control loop.
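The following sketch mirrors the described behaviour of the velocity controller. The proportional, clamped form is our assumption; the text specifies only the limits, the desired flow and the qualitative slow-down/speed-up behaviour. Defaults follow the experiments in Section 5.

```python
def translational_velocity(f_a, f_d=3.0, v_min=0.2, v_max=0.55):
    """Forward velocity that tries to keep the radial optical flow near f_d (sketch).

    f_a is the larger of the two median flows mapped back to the original image.
    The commanded velocity shrinks towards v_min as the measured flow grows
    beyond f_d (a wall is approached) and rises towards v_max as the flow
    decreases.  Velocity limits in m/s taken from the experiments (20 and 55 cm/s).
    """
    if f_a <= 0.0:
        return v_max                     # no measurable flow: drive at full speed
    return min(v_max, max(v_min, v_max * f_d / f_a))
```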
A distributed client-server architecture is used to control the robot. Therefore, to obtain accurate status information from the robot, a delay of 150 ms is used after a change of velocity and the old image is discarded. This allows us to achieve an accurate compensation of the rotatory motion of the camera. The PID controller produces a smooth trajectory with only small rotations. Therefore we used a much simpler control strategy for most of our experiments. This strategy turns the robot to the left with constant velocity if the optical flow in the peripheral area is larger on the right side than on the left, and to the right if the optical flow on the left is larger than the flow on the right. Only in the rare case where both flows are exactly equal do we stop the rotatory movement; however, this almost never happens. This control strategy produces an oscillatory behavior where the robot is constantly turning to one side or the other. Because we want to perform robot control during rotatory motions, this is a suitable strategy to test our robot.
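The simple strategy can be summarized in a few lines. The constant turning rate and the sign convention (positive values turn left) are our assumptions.

```python
def simple_rotation_command(f_left, f_right, omega_const=0.2):
    """Bang-bang steering used in most experiments (sketch, omega_const is ours).

    Turn left at a constant rate if the right peripheral flow is larger (the
    right wall is closer), turn right if the left flow is larger, and stop
    rotating only in the rare case of exactly equal flows.  If one side yields
    no flow at all, turn towards the side whose distance is known.
    """
    if f_left is None and f_right is not None:
        return -omega_const              # no information on the left: turn right
    if f_right is None and f_left is not None:
        return omega_const               # no information on the right: turn left
    if f_left is None or f_left == f_right:
        return 0.0
    return omega_const if f_right > f_left else -omega_const
```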
For our experiments we used Colin, a Real World Interface B21 mobile robot (Fig. 5) with a Directed Perception pan-tilt unit. We only used one camera for the experiments described here. The camera setup can be seen in Fig. 5. Images were acquired with a resolution of 128×128 pixels and the transformation to complex log space was done with $r_{\max}=32$, $r_{src}=64$ and $\varphi_{\max}=32$. The experiments were done in the corridor at our lab. The width of the corridor was approximately 1.41 m. The environment was not modified by attaching structured wallpaper, which would have simplified the computation of optical flow. However, the environment did contain several posters mounted on the walls along the corridor.
With this setup we conducted a series of experiments with the simple controller and with the PID controller. During the experiments with the simple controller the robot was moving with a constant forward velocity of 40 cm/s. During the experiments with the PID controller the minimum translational velocity $v_{\min}$ was set to 20 cm/s and the maximum translational velocity $v_{\max}$ was set to 55 cm/s. The desired optical flow in the original image was set to 3 pixels. Fig. 6 shows the results for the simple controller and for the PID controller. We recorded the path travelled by the robot using the robot's odometry data. The images shown are a subset of the images that were used to control the robot during the runs.
Fig. 6. Path of the robot recorded from odometry data for different starting positions. The two on the left were recorded using the simple controller and the two on the right were recorded with the PID controller. The images along the path are a subset of the images that were recorded during execution of the algorithm. The path is overlayed on a map of the environment.
As can be seen in the images, after a distance of approximately 11 m the corridor is 0.35 m wider than the remainder of the corridor. This part of the corridor is approximately 3.21 m long. Just before this part of the corridor there is a staircase located on the right side of the corridor. In these areas the path of the robot is slightly offset to the right side of the corridor. The path recorded with the PID controller is much smoother than the one obtained with the simple controller. During the runs which started in the middle of the corridor, 126 images (1.77 frames per second) were recorded with the simple controller and 125 images (1.73 frames per second) with the PID controller. The robot travelled 27.1 m and 26.9 m, respectively, during the runs. Distance was measured using the robot's odometry.
We have demonstrated that centering behavior may also be achieved with a
single camera with the focus of expansion in view. Comparison of the optical
flow for the left and right peripheral areas is done in ego-motion complex log
space. Since only the optical flow due to translation of the robot may be used
for the centering behavior we compensate for the image movement which is due to
the rotatory component of the camera motion. To compensate for the image
movement we use the status information which is directly available from the
robot. Using the available status information may speed up robot control by
eliminating the need to derive the ego-motion of the robot from the images.
This work was supported in part by a scholarship according to the Landesgraduiertenförderungsgesetz to Marc Ebner.
For image processing we have used the Vista software environment [30].
1. Y. Aloimonos (Ed.), Active Perception, Lawrence Erlbaum, Hillsdale, NJ, 1993.
2. G.A. Bekey, Biologically inspired control of autonomous robots, Robotics and Autonomous Systems 18 (1996), pp. 21-31.
3. M. Brady, H. Wang, Vision for mobile robots, Philosophical Transactions of the Royal Society of London, Series B 337 (1992), pp. 341-350.
4. R.H.S. Carpenter, Movements of the Eyes, 2nd edn., Pion Limited, London, 1988.
5. D. Coombs, M. Herman, T.-H. Hong and M. Nashman, Real-time obstacle avoidance using central flow divergence and peripheral flow, IEEE Transactions on Robotics and Automation 14 (1) (1998), pp. 49-59.
6. D. Coombs, K. Roberts, "Bee-bot": Using peripheral optical flow to avoid obstacles, in: D. Casasent (Ed.), Intelligent Robots and Computer Vision XI, Proceedings of the Society of Photo-Optical Instrumentation Engineers, 1992, pp. 714-721.
7. J.J. Craig, Introduction to Robotics: Mechanics and Control, 2nd edn., Addison-Wesley, Reading, MA, 1989.
8. B. Crespi, C. Furlanello and L. Stringa, A memory-based approach to navigation, Biological Cybernetics 69 (1993), pp. 385-393.
9. M. Ebner, On the evolution of edge detectors for robot vision using genetic programming, in: H.-M. Groß (Ed.), Workshop SOAVE'97 Selbstorganisation von Adaptivem Verhalten, VDI, Düsseldorf, 1997, pp. 127-134.
10. M. Ebner, On the evolution of interest operators using genetic programming, in: R. Poli, W.B. Langdon, M. Schoenauer, T. Fogarty, W. Banzhaf (Eds.), Late Breaking Papers at EuroGP'98: The First European Workshop on Genetic Programming, Paris, France, April 1998, pp. 6-10.
11. M. Fossa, E. Grosso, F. Ferrari, M. Magrassi, G. Sandini, M. Zapendouski, A visually guided mobile robot acting in indoor environments, in: Proceedings of the IEEE Workshop on Applications of Computer Vision, Palm Springs, CA, IEEE, New York, 1992, pp. 308-316.
12. N. Franceschini, J.M. Pichon, C. Blanes, From insect vision to robot vision, Philosophical Transactions of the Royal Society of London, Series B 337 (1992), pp. 283-294.
13. J. Frazier, R. Nevatia, Detecting moving objects from a moving platform, in: Proceedings of the DARPA Image Understanding Workshop, Pittsburgh, PA, 1990, pp. 348-355.
14. B.K.P. Horn and B.G. Schunck, Determining optical flow, Artificial Intelligence 17 (1981), pp. 185-203.
15. G.A. Horridge, A theory of insect vision: Velocity parallax, Proceedings of the Royal Society, London, Series B 229 (1981), pp. 13-27.
16. G.A. Horridge, What can engineers learn from insect vision? Philosophical Transactions of the Royal Society of London, Series B 337 (1992), pp. 271-282.
17. I. Horswill, A simple, cheap, and robust visual navigation system, in: J.-A. Meyer, H.L. Roitblat, S.W. Wilson (Eds.), From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior, Honolulu, Hawaii, 1992, MIT Press, Cambridge, MA, 1993, pp. 129-136.
18. R. Jain, Segmentation of frame sequences obtained by a moving observer, IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (5) (1987), pp. 624-629.
19. R. Jain, S.L. Bartlett and N. O'Brien, Motion stereo using ego-motion complex logarithmic mapping, IEEE Transactions on Pattern Analysis and Machine Intelligence 9 (3) (1987), pp. 356-369.
20. R. Jain, R. Kasturi, B.G. Schunck, Machine Vision, McGraw-Hill, New York, 1995.
21. T. Jochem and D. Pomerleau, Life in the fast lane: The evolution of an adaptive vehicle control system, Artificial Intelligence Magazine 17 (2) (1996), pp. 11-50.
22. J. Košecká, Visually guided navigation, Robotics and Autonomous Systems 21 (1997), pp. 37-50.
23. J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA, 1992.
24. H.P. Moravec, Towards automatic visual obstacle avoidance, in: Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, MA, 1977, p. 584.
25. H.P. Moravec, Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover, Ph.D. Thesis, Computer Science Department, Stanford University, No. STAN-CS-80-813 and AIM-340, September 1980.
26. D. Murray and A. Basu, Motion tracking with an active camera, IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (5) (1994), pp. 449-459.
27. H. Neven and G. Schöner, Dynamics parametrically controlled by image correlations organize robot navigation, Biological Cybernetics 75 (1996), pp. 293-307.
28. P.-W. Ong, R.S. Wallace, E.L. Schwartz, Space-variant optical character recognition, in: Proceedings of the 11th International Conference on Pattern Recognition, IEEE, New York, 1992, pp. 504-507.
29. R.A. Peters II and M. Bishay, Centering peripheral features in an indoor environment using a binocular log-polar 4 dof camera head, Robotics and Autonomous Systems 18 (1996), pp. 271-281.
30. A.R. Pope, D.G. Lowe, Vista: A software environment for computer vision research, in: Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, New York, 1994, pp. 768-772.
31. G. Sandini, F. Gandolfo, E. Grosso, M. Tistarelli, Vision during action, in: Y. Aloimonos (Ed.), Active Perception, Lawrence Erlbaum, Hillsdale, NJ, 1993, pp. 151-190.
32. J. Santos-Victor and G. Sandini, Embedded visual behaviors for navigation, Robotics and Autonomous Systems 19 (1997), pp. 299-313.
33. J. Santos-Victor, G. Sandini, F. Curotto, S. Garibaldi, Divergent stereo for robot navigation: Learning from bees, in: Proceedings of Computer Vision and Pattern Recognition, New York, 1993, pp. 434-439.
34. J. Santos-Victor, G. Sandini, F. Curotto and S. Garibaldi, Divergent stereo in autonomous navigation: From bees to robots, International Journal of Computer Vision 14 (1995), pp. 159-177.
35. E.L. Schwartz, Computational anatomy and functional architecture of striate cortex: A spatial mapping approach to perceptual coding, Vision Research 20 (1980), pp. 645-669.
36. M.V. Srinivasan, Distance perception in insects, Current Directions in Psychological Science 1 (1) (1992), pp. 22-26.
37. M.V. Srinivasan, How bees exploit optic flow: Behavioural experiments and neural models, Philosophical Transactions of the Royal Society of London, Series B 337 (1992), pp. 253-259.
38. P.J. Sobey, Active navigation with a monocular robot, Biological Cybernetics 71 (1994), pp. 433-440.
39. M. Tistarelli and G. Sandini, On the advantages of polar and log-polar mapping for direct estimation of time-to-impact from optical flow, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (4) (1993), pp. 401-410.
40. C. Tomasi, T. Kanade, Factoring image sequences into shape and motion, in: Proceedings of the IEEE Workshop on Visual Motion, Nassau Inn, Princeton, NJ, 7-9 October 1991, IEEE Computer Society Press, Silver Spring, MD, 1991, pp. 21-28.
41. M.J. Tovée, An Introduction to the Visual System, Cambridge University Press, Cambridge, 1996.
42. E. von Holst and H. Mittelstaedt, Das Reafferenzprinzip (Wechselwirkungen zwischen Zentralnervensystem und Peripherie), Die Naturwissenschaften 37 (20) (1950), pp. 464-476.
43. J. Vogelgesang, A. Cozzi, F. Wörgötter, A parallel algorithm for depth perception from radial optical flow fields, in: C. von der Malsburg, J.C. Vorbrüggen, W. von Seelen, B. Sendhoff (Eds.), Artificial Neural Networks: Sixth International Conference, Proceedings of ICANN 1996, Springer, Berlin, 1996, pp. 721-725.
44. R.S. Wallace, P.-W. Ong, B.B. Bederson and E.L. Schwartz, Space variant image processing, International Journal of Computer Vision 13 (1) (1994), pp. 71-90.
Corresponding author. Present address: Universität Würzburg, Lehrstuhl für
Informatik II, Programmiersprachen und Programmiermethodik, Am Hubland, 97074
Würzburg, Germany; email: ebner@informatik.uni-wuerzburg.de
Ebner: Marc Ebner was born in Stuttgart, Germany, in 1969. He
received the M.S. degree in Computer Science from New York University, NY, in
1994, Dipl.-Inform. from the Universität Stuttgart, Germany, in 1996 and Dr.
rer. nat. from the Universität Tübingen, Germany, in 1999. At present he is a
research assistant at the Universität Würzburg, Germany. His research interests
include biologically inspired systems, evolutionary algorithms in computer
vision, evolutionary robotics, evolutionary algorithms, computer vision and
robotics.
Zell: Andreas Zell received the Diploma in Computer Science
from the University of Kaiserslautern, Germany, in 1986, M.S. degree from
Stanford University in 1987, Ph.D. in 1989 and the Habilitation (venia legendi)
in 1994 from the University of Stuttgart, Germany, all in computer science. From 1990 to 1995 he was an Assistant Professor at the University of Stuttgart, Institute for Parallel and Distributed High-Performance Systems. In
1991 he won the German University Software Prize for the development of the
Stuttgart Neural Network Simulator (SNNS) and received an International MasPar
Challenge Prize in 1992. Since 1995 he has been Professor and holder of the Chair of Computer Architecture at the University of Tübingen,
Wilhelm-Schickard-Institute for Computer Science. His research interests include
artificial neural networks, evolutionary algorithms, artificial life, mobile
robotics and bioinformatics. He directs the Mobile Robotics Lab at the
University of Tübingen whose team was a finalist in the 1998 RoboCup middle size
competition at Paris. He participates in a number of joint research projects
with industry. He also coordinates the new bioinformatics curriculum at the
University of Tübingen.