Thursday, July 9th, 12.30 p.m.

    venue: Psychology Labs, Kingston Lane Entrance,
    Uxbridge Campus, Brunel University.


    12.30 p.m. Registration, set up posters
    12.45 p.m. Picnic Lunch (inside if raining, outside if fine).

    1.45 - 2.45 p.m. Keynote speaker:
    John Findlay (Durham University)
    "Visual attention: what's behind the spotlight?"

    2.45 - 3.15 p.m. Paul Hibbard (Surrey University)
    "Segmentation of motion information by binocular disparity"

    3.15 - 4.15 p.m. Tea + posters

    4.15-4.45 p.m. Noriko Yamagishi and Steve Anderson (Royal Holloway)
    "Localization of motion and colour stimuli in human peripheral vision."

    4.45 - 5.10 p.m. Michael Wright (Brunel University)
    "The relationship between the flicker motion aftereffect and velocity aftereffect"

    5.10 - 5.35 p.m. John Harris (Reading University).
    "Spatial information and the allocation of motion"

    5.35 - 6.35 p.m. Drinks and posters

    7 p.m. Participants are invited to join the guest speaker for dinner in a restaurant.

    We thank the Scientific Affairs Board of the British Psychological Society for financial support.

    Those wishing to attend should inform Prof. Michael Wright as soon as possible. Note to contributors: poster dimensions 94 x 64 cm, landscape; 2 panels per poster.



    John Findlay,

    Department of Psychology, Durham University, Durham, U.K.

    A substantial body of work on visual attention has built on Posner's well-known idea of an attentional spotlight. I shall argue that the spotlight metaphor is potentially misleading because it implicitly encourages the treatment of attention as a post-iconic process. Alternative theories, such as the zoom-lens theory, generally commit the same epistemological error.
    These notions will be elaborated through a consideration of visual search. Treisman and Gelade proposed a theory of visual search in which covert visual attention played a key role. However, this theory ignored both the inhomogeneity of visual processing (fovea and periphery) and the active use of eye movements. When these factors are taken into consideration, I shall argue, covert visual attention is not a helpful search strategy in any realistic visual search task.
    This leads to a shift in emphasis from the view that attention is a central cognitive process involved in analysing the retinal image to the alternative that attention in vision works through continual interplay of sensory and motor systems to select a rapid succession of retinal images for analysis.


    Paul Hibbard,

    Department of Psychology, University of Surrey, Guildford, Surrey, GU2 5XH

    Recent studies (Hibbard et al., Vision Research, 1998; Snowden and Rossiter, Perception, 1998) have demonstrated that the perception of coherent global motion is facilitated if the signal and noise components of a stimulus are segregated in depth on the basis of binocular disparity. In the current study, these findings were extended to investigate the effects of depth segregation on (i) motion correspondence and (ii) motion capture. The maximum displacement limit (Dmax) for discriminating the direction of motion of random dot kinematograms (RDKs) was measured for stimuli containing dots on either a single plane or on two planes separated in depth. In the latter condition, the overall density of dots was held constant while the number of dots on each of the two planes was varied. For a given dot density, Dmax was greater when the dots were distributed over two planes, and increased as the density on the more sparsely populated plane decreased. This result suggests that the visual system makes use of information about the binocular disparity of elements in solving the motion correspondence problem. In a second experiment, the extent to which a moving low-frequency sinewave captured the motion of a random dot pattern was assessed for stimuli in which the dot pattern was presented with either the same disparity as the grating or a different one. Capture was less evident when the two patterns were separated in depth. We conclude that the segmentation of stimuli on the basis of binocular disparity influences motion coherence thresholds, the maximum displacement limit for motion, and the strength of motion capture.


    Noriko Yamagishi & Stephen J. Anderson

    Department of Psychology, Royal Holloway, University of London, Egham, Surrey, TW20 0EX

    Purpose. There is evidence to suggest that the magnocellular pathway may provide the signal that allows the rapid detection and localization of objects in the peripheral visual field prior to foveation. If this is the case, localization (in space and/or time) for magnocellular-type stimuli (e.g. rapidly flickering luminance patterns) may be superior to that for parvocellular-type stimuli (e.g. isoluminant chromatic patterns). The aim of this study was to test this hypothesis.
    Method. Expt. 1: Spatial localization accuracy for eccentrically (7.5-12.5 deg) presented motion (0.5 c/deg, 10 Hz flickering luminance sinusoids) and colour (0.5 c/deg, stationary, isoluminant red/green modulated sinusoids) stimuli was measured using the method of constant stimuli (n = 50 trials). Each stimulus was presented for 100 ms within a circular patch of 4 deg diameter, at 0-6 dB above its detection contrast threshold, which was determined separately using an interleaved staircase procedure. The stimulus location was randomised between trials, and the subject's task was to indicate its perceived location by marking the monitor screen with a pen, using the dominant hand. In some experiments, distractor targets (1-5 non-patterned luminance patches of 4 deg width) were presented 0-75 ms prior to the stimulus. Expt. 2: Reaction-time measures for the detection of motion and colour stimuli, presented at 0, 3 and 6 dB above threshold, were also completed. Results were averaged over 200 trials.
    Results. There was no significant difference between the spatial localization accuracy for motion stimuli and that for colour stimuli (p > 0.05). Both stimulus types could be localized to within 1.1 ± 0.1 deg, and this accuracy was not affected by the presence of distractor targets. However, reaction times were significantly faster for motion stimuli than for colour stimuli (p < 0.05).
    Conclusion. Reaction time measures provide support for the hypothesis that the M pathway may provide the signal that allows the rapid detection of peripheral visual objects.
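For illustration, an interleaved staircase of the kind used above to set contrast thresholds can be sketched as follows. This is a minimal simulation, not the authors' implementation: the 2-down/1-up rule, step size and simulated-observer noise are all assumptions.

```python
import random

def run_staircase(true_threshold, start=0.5, step=0.05, n_reversals=8, seed=0):
    """Simulate a 2-down/1-up staircase converging on a contrast threshold.

    The 2-down/1-up rule tracks the ~70.7%-correct point of the
    psychometric function.  A toy observer responds correctly whenever
    the presented contrast exceeds a slightly noisy threshold.
    """
    rng = random.Random(seed)
    contrast = start
    correct_run = 0
    last_direction = 0          # +1 = contrast going up, -1 = going down
    reversals = []
    while len(reversals) < n_reversals:
        correct = contrast > true_threshold + rng.gauss(0, 0.01)
        if correct:
            correct_run += 1
            if correct_run == 2:        # two correct in a row: make it harder
                correct_run = 0
                if last_direction == +1:
                    reversals.append(contrast)
                last_direction = -1
                contrast = max(contrast - step, 0.0)
        else:                           # one error: make it easier
            correct_run = 0
            if last_direction == -1:
                reversals.append(contrast)
            last_direction = +1
            contrast += step
    # Estimate threshold as the mean of the last six reversal points.
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

With a simulated threshold of 0.2, the estimate settles near 0.2; "interleaved" simply means that two or more such staircases (e.g. one per stimulus type) are run with their trials randomly intermixed.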


    Michael Wright,

    Department of Human Sciences, Brunel University, Uxbridge, Middlesex UB8 3PH

    Flicker motion aftereffects (FMAEs) may be demonstrated, following adaptation to directional motion, using directionally ambiguous test stimuli, typically counterphase gratings. The FMAE has been contrasted with the motion aftereffect (MAE) obtained with a stationary test stimulus, and it has been suggested that the FMAE is sensitive to second-order as well as first-order motion. The velocity aftereffect (VAE) is obtained with a directionally unambiguous moving test stimulus and, like the FMAE but unlike the MAE, depends upon the velocity of the adapting stimulus rather than its temporal frequency.
    A counterphase grating may be regarded as the sum of two drifting gratings of equal contrast and equal and opposite velocity. Might the FMAE be understood simply as the sum of the VAEs on its components? Counterphase gratings were used as test stimuli for measuring the FMAE following adaptation to drifting gratings and plaids. The effects of the same adapting stimuli were measured, under comparable conditions, on the drifting gratings of which the counterphase gratings were composed. A nulling method was used in which a set of constant stimuli was constructed from complementary opposite-direction drifting gratings; either the relative contrast or the relative velocity of the opposing gratings was varied to generate the set. The subject's task was simply to report the predominant direction of motion of the test stimulus, which was either a single counterphase grating or a pair of opposed-motion drifting gratings.
    The results indicate that the FMAE is greater than the sum of its component VAEs. Moreover, after adaptation to symmetrical plaids drifting in the same direction as the test stimulus, the VAE was tuned to the orientation of the plaid's components, whereas the FMAE was maximal for a plaid whose components were 45 deg from the test orientation. Adaptation of the FMAE can occur when the plaid component orientation and the test grating orientation are markedly different. It is concluded that the FMAE, unlike the VAE, is not strongly determined by first-order motion and appears more sensitive to higher-order motion.
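The decomposition invoked above, a counterphase grating as the sum of two opposed drifting gratings of half contrast, is just the product-to-sum identity cos(kx)cos(wt) = 1/2[cos(kx - wt) + cos(kx + wt)]. A quick numerical check (the parameter values below are arbitrary):

```python
import math

def counterphase(c, k, w, x, t):
    """Counterphase grating: contrast c, spatial freq k, temporal freq w."""
    return c * math.cos(k * x) * math.cos(w * t)

def drifting_pair(c, k, w, x, t):
    """Sum of two opposite-direction drifting gratings at half contrast."""
    return 0.5 * c * (math.cos(k * x - w * t) + math.cos(k * x + w * t))

# The two expressions agree at every point in space and time.
for x in (0.0, 0.3, 1.7):
    for t in (0.0, 0.5, 2.1):
        assert abs(counterphase(1.0, 2.0, 3.0, x, t)
                   - drifting_pair(1.0, 2.0, 3.0, x, t)) < 1e-12
```

The identity is exact, which is what makes the question of whether the FMAE equals the sum of the component VAEs a well-posed one.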


    John Harris

    Department of Psychology, University of Reading, Whiteknights, Reading RG6 6AL

    The visual system faces the problem of appropriately recovering the movement of different objects in the environment (seeing what it is that is in motion). This depends on relative motion in the retinal image and on co-occurring eye movements, but also on the spatial characteristics of the display (as when a small stationary target on a large, slowly moving background appears to move against the static background). The role of spatial information in the allocation of motion was studied by measuring the strength of the motion aftereffect (with ratings of initial strength and timed duration), following a standard adaptation regime, on a test window of previously moving, now stationary, vertical stripes, as a function of their spatial relationship (offset) with stationary vertical surrounding stripes. MAEs were stronger and longer-lived when the test stripes were offset by about 90 deg than when they were perfectly aligned with the surrounding stripes, an effect which influenced MAE strength more than did surrounding the test window with a border. When test-stripe offset was systematically varied, MAE strength and duration rose with offset up to 72 deg, the largest offset between 0 and 180 deg used in the experiment. Although the MAE was reduced when the offset was further increased to 180 deg (white stripes aligned with black), it was still greater at that offset than at 0 deg (white stripes aligned with white). In contrast, when subjects were asked to rate how separate the test field appeared from the surround, in the absence of any prior adaptation to motion, ratings rose with offset up to 180 deg. When a range of offsets between 72 and 180 deg was tested, segregation ratings were again higher for 180 than for 90 deg, but peaked at 166 deg, whereas MAE strength peaked at 166 deg.
    Thus both MAE strength and judged segregation vary with the relationship between the test and surrounding stripes, but the former seems to depend on the size of the offset or phase shift, whereas the latter seems to depend more on the length of black/white border at the edge of the test field. One explanation for the MAE effects is that motion signals from adapted motion detectors propagate along more or less collinear contours, inducing motion into the surround and so reducing the relative motion between the test and surround stripes (and hence the MAE). This may happen because, outside the laboratory, collinear edges are likely to belong to the same object, whose parts can be assumed to move together.



    Peter J Simpson, Mark F Bradshaw and Neil Stringer.

    Department of Psychology, University of Surrey, Guildford, Surrey, GU2 5XH

    Previous studies of the role of colour in object perception have typically used stylised drawings with blocks of fill-in colour and found that colour does not play a major part in object recognition or classification (e.g. Biederman and Ju, 1988, Cognitive Psychology, 20, 38-64; Ostergaard and Davidoff, 1985, JEP:LMC, 11, 579-587). However, stylised stimuli presented on uniform backgrounds greatly simplify the segmentation problem, and the use of fill-in colour omits information about 3D form which may be involved in recognition. The influence of colour in these processes may therefore have been underestimated. In the present study we investigated the role of colour further by using photo-realistic objects presented within natural scenes. Twenty subjects were required to decide whether a target object, initially presented for 1 second, was present or absent within a scene. The object within the scene could have (i) the same form and colour, (ii) the same form but a different colour, or (iii) a different form but the same colour as the target. The location of the object was either pre-defined for each trial (recognition-only task) or it appeared in one of eight locations (search-and-recognition task). Error scores and decision times were recorded. Decision times for the search-and-recognition task were longer than for the recognition-only task, and the pattern of results for the three conditions differed between tasks. The findings suggest that colour plays a primary role in the location and recognition of objects.


    Robin Walker, J. Findlay, H. Deubel,

    Department of Psychology, Royal Holloway, Egham, Surrey, TW20 0EX

    The time taken to initiate a saccade (its latency) may be increased when a distractor is presented in the opposite visual field to the saccade target. We have termed this latency increase the 'remote distractor effect' (RDE). The conditions which produce the RDE are different from those used to demonstrate the 'global effect' modulation of amplitude: in the global-effect paradigm, distractors are presented in the same hemifield as the target and there is no effect on saccade latency. We report the results of a series of studies in which distractors were presented at various 2D locations in both visual fields. Saccade latency was increased when distractors were presented in either visual field, with the greatest increase for distractors at central fixation. Saccade latency was not affected by distractors presented on the horizontal axis in the same hemifield as the target, in which case the global effect was observed. For locations other than the horizontal axis, the critical location for modulating amplitude appeared to be within +/- 15 degrees of the target location. Distractors presented outside this region increased latency but had no effect on amplitude. Furthermore, the latency increase was found to vary systematically with the relationship between target and distractor eccentricities. Our data suggest that inhibitory processes, which have been demonstrated in the rostral pole of the superior colliculus, may operate over extended regions of the visual map.


    A D Parton, M F Bradshaw

    Department of Psychology, University of Surrey, Guildford, Surrey, GU2 5XH, UK (fax: +44 1483 259 553)

    We previously reported that systematic distortions in the perception of depth can occur for physical stimuli under quasi-natural viewing conditions (Bradshaw et al., ECVP 1997). Those stimuli comprised three points of light viewed in darkness. However, surfaces viewed in well-illuminated environments, which provide perspective information and higher-order motion and/or disparity cues, can support near-veridical performance (Durgin et al, 1995, JEP:HP&P, 21, 679-699). Here we investigate the role of surface-texture information within our quasi-natural viewing paradigm. Observers set the angle between two adjustable lines, on a computer display, to match the angle between two planar surfaces hinged around a vertical axis. The surfaces comprised 25 LEDs which formed either (i) a regular pattern, with LEDs equally spaced within a 25 cm square grid, or (ii) an irregular pattern, with LEDs randomly arranged within the same area. Five test surface angles (60, 80, 90, 100 and 120 degrees) were presented at three viewing distances (150, 212 and 300 cm) and under three viewing conditions (monocular-static, monocular-moving and binocular-static). Results for the binocular condition were unbiased but imprecise for both texture patterns. In the monocular-moving condition, however, settings were biased at oblique angles (100 and 120 degrees) and were improved by regular texture information. Judgements were not significantly affected by changes in viewing distance. We conclude that the addition of explicit surface information may improve performance under binocular viewing conditions by eliminating setting biases, although considerable variation remains in observers' individual settings. Regular surface texture further enhances veridical perception only where systematic biases persist, though it does improve the precision of observers' performance.


    Simon Watt, Mark Bradshaw, Paul Hibbard and Ian Davies

    Department of Psychology, University of Surrey, Guildford, Surrey, GU2 5XH, UK

    We examined the effects of object distance and size on reaching behaviour under monocular and binocular viewing. Servos et al (1992, Vision Research, 32, 1513-1521) found that reaching under binocular viewing was more efficient, showing shorter onset times, higher peak velocities and shorter movement times. Under both binocular and monocular viewing, kinematic indices of the reach (e.g. peak velocity, peak acceleration) were scaled by object distance. However, subjects did not compensate for retinal size or retinal disparity when viewing distance was manipulated.
    In our experiment, subjects reached for and lifted solid rectangular objects placed at 30, 43 and 55 cm along the midline. We used a range of object sizes such that the projected binocular disparity could be equated at the three distances. Reaches were made in normal lighting conditions under monocular and binocular viewing. Subjects' heads were held stationary. A MacReflex Motion Analysis system was used to analyse movement parameters.
    Peak velocity of the wrist increased with increasing object distance under both monocular and binocular conditions. Peak velocity also increased with increasing object depth. Notably, this latter effect was significantly greater under binocular viewing than under monocular viewing. Movement onset times were slightly longer under monocular than under binocular viewing.
    Even when binocular disparity was controlled for, both object distance and object depth affected reach kinematics. There was a differential effect of object depth under binocular and monocular viewing conditions, and this is under current investigation.


    David Rose,

    Department of Psychology, University of Surrey, Guildford, Surrey, GU2 5XH, UK

    In the last century, researchers worked within a very different framework from the one we assume today. Descartes had defined the mind as an essentially aspatial entity, distinct from the mere matter of the brain. Kant had reasoned that knowledge is impossible unless we already have an inbuilt awareness of space within the mind. It was Lotze (1852) who responded to the problem of how the brain could tell the aspatial mind where something is: he adapted Müller's (1826) theory of 'specific nerve energies' to assert that 'local signs' on nerve cells could carry information from the brain to the mind about stimulus location. Hering (1864) even envisaged these as a set of Cartesian x-y coordinates built into the visual system at the cellular level.
    Modern neural coding theory assumes that information can be carried in two ways: via the activity levels and patterns within each nerve cell (rate coding), and via which nerve cells in a given set are active. The latter notion, however, is ambiguous: information might be encoded implicitly by some cells being active and others not (place coding), or explicitly by particular qualities or essences possessed by each individual active cell (the nineteenth-century notion of labelled lines). Some modern vision researchers seem to confuse these ideas, and to be unclear about which levels of explanation they apply to (cellular, informational, subjective). It will be argued that a clarification is in order, and that the validity of the nineteenth-century ideas is somewhat debatable.


    N.E.Scott-Samuel and A.T.Smith

    Department of Psychology, Royal Holloway, Egham, Surrey, TW20 0EX

    Two questions were addressed: (1) are there mechanisms specifically sensitive to stereo-defined motion? (2) If so, do their properties resemble those of 1st-order or 2nd-order mechanisms?
    (1) We used a drifting, horizontal, 0.25 c/deg missing-fundamental stimulus, defined purely in stereoscopic depth. The phase of the missing fundamental was incremented in 90 deg steps at a rate of 16.7 Hz, and the dynamic random binary noise used as a carrier was updated at the same frequency. The missing-fundamental waveform moved either upwards or downwards, and observers indicated its perceived direction in a single-interval, binary-choice task with no feedback. The amplitude of the waveform varied between 3.5 and 21.2 min arc of disparity. (2) Observers indicated the orientation and direction of a drifting 0.5 c/deg sinusoid, defined only in depth, in a single-interval task with two binary choices and no feedback.
    (1) When performance was not at chance, motion was seen in the direction opposite to the phase shift of the missing-fundamental sequence, implying that observers were following the largest (aliased) Fourier component in the stimulus and not the spatial features of the image. A control experiment showed that the 3f and 5f components of the missing fundamental were equally visible, eliminating the possibility that attenuation of higher spatial-frequency components could explain the results. (2) Orientation thresholds (in terms of disparity) were lower than direction-discrimination thresholds for the drifting sinusoid, as is the case for 2nd-order motion (Smith and Ledgeway, 1997, Vision Research, 37, 45-62) and colour-defined motion (Lindsey and Teller, 1990, Vision Research, 30, 1751-1761), but not for luminance gratings (Watson et al, 1980, Vision Research, 20, 341-347). The backwards motion of the missing-fundamental stimulus implies that low-level mechanisms, sensitive to stereoscopic motion energy, mediate cyclopean motion perception. The difference between orientation and direction thresholds indicates that their properties more closely resemble those of 2nd-order than of 1st-order mechanisms.
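The backwards motion attributed to aliasing can be checked arithmetically: each 90 deg step of the (absent) fundamental advances the 3f component by 270 deg, which is indistinguishable from a 90 deg step in the opposite direction. A small illustrative sketch (not part of the study):

```python
# Phase step per frame for each harmonic of a missing-fundamental stimulus
# whose (absent) fundamental is shifted in 90 deg increments.  Mapping the
# step into [-180, 180) gives the nearest equivalent displacement, which is
# what a motion-energy detector effectively sees.
def effective_step(harmonic, fundamental_step=90):
    step = (harmonic * fundamental_step) % 360
    return step - 360 if step >= 180 else step

# 3f, the largest component actually present, aliases to a -90 deg step:
# it appears to move backwards.
print(effective_step(3))   # -90
print(effective_step(5))   # 5 * 90 = 450 -> +90: forwards
```

Since the 3f component dominates the missing-fundamental waveform, a mechanism following Fourier motion energy reports the backward direction, exactly as observed.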


    N.E.Scott-Samuel and A.T.Smith

    Department of Psychology, Royal Holloway, Egham, Surrey, TW20 0EX

    Despite strong converging evidence that there are separate mechanisms for the processing of 1st-order and 2nd-order motion, the issue remains controversial. Here we present compelling new evidence for separate mechanisms, using a direction discrimination task performed on a composite motion stimulus. The motion sequence consisted of a dynamic binary noise carrier divided into horizontal strips of equal height, each of which was sinusoidally modulated (1.0 c/deg) in either contrast or luminance. The modulation moved leftward or rightward (3.75 Hz) in alternate strips. The single-interval task was to identify the direction of motion of a central, marked strip. Three conditions were tested: uniform 1 (all 1st-order strips), uniform 2 (all 2nd-order strips), and mixed (alternate 1st-order and 2nd-order strips, correlated with the direction alternation). The dynamic noise was refreshed at 15 Hz and the strip motion was sampled at 90 deg phase intervals. In preliminary experiments, the two uniform conditions were run with the strip height fixed. Performance fell as modulation depth decreased, and the threshold modulation depth for each type was determined. These threshold values were used to scale modulations of the two types in the main experiment so as to equate visibility of the 1st-order and 2nd-order components. In this experiment strip height was manipulated with fixed modulation depth. The minimum strip height at which direction identification was possible was strikingly lower in the mixed condition than in the uniform conditions. Qian et al (1994, J. Neuroscience, 14, 7357-7366) have shown that 1st-order motion signals cancel if locally balanced. We argue that the two uniform conditions demonstrate local cancellation of motion signals, whereas in the mixed condition this does not occur. We attribute this non-cancellation to separate processing of 1st-order and 2nd-order motion inputs.


    Inka Steffens and Andy Smith,

    Department of Psychology, Royal Holloway, Egham, Surrey, TW20 0EX

    A random dot kinematogram in which individual dot directions are chosen at random from a range of (say) 90 deg appears to move in the direction corresponding to the mean of the dot directions (global motion). If half the dots are drawn from each of two distributions, both 90 deg wide but with means 180 deg apart, the display appears as two transparent motion surfaces. If the dots from the two distributions are spatially segregated, the display can be parsed into two adjacent global-motion surfaces separated by a sharp motion-defined boundary. If the two distributions are made less distinct by widening them, there comes a point at which the ability to parse them breaks down. We have examined whether giving the dots different contrasts improves observers' ability to parse the dots into two surfaces. Intuitively this might be expected, but global-motion models tend to be based simply on motion vectors, in which case the contrast of the dots should be immaterial. We find that contrast is indeed immaterial: for both spatial segregation and transparency, introducing a contrast difference between the two sets of dots has no effect on the critical direction-distribution width at which parsing ability breaks down.
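The global-motion percept described in the first sentence is a vector average of the dot directions. A brief sketch (the dot count and direction range below are illustrative, not the stimulus parameters of the study):

```python
import math
import random

def mean_direction(angles_deg):
    """Vector-average (circular mean) of a set of directions, in degrees."""
    sx = sum(math.cos(math.radians(a)) for a in angles_deg)
    sy = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(sy, sx)) % 360

rng = random.Random(1)
# 500 dots with directions drawn uniformly from a 90 deg range
# centred on 45 deg, as in a broad-distribution RDK.
dots = [rng.uniform(0, 90) for _ in range(500)]
print(round(mean_direction(dots)))  # close to 45
```

The vector average uses only the direction vectors, which is why a pure motion-vector model predicts no effect of dot contrast on parsing.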


    Paul W. Roden,

    Department of Human Sciences, Brunel University, Uxbridge, Middlesex UB8 3PH

    Eleven right-hemisphere stroke patients who performed poorly on cancellation tasks were routinely cued to begin their cancellation of targets in the upper left-hand corner of the assessment page. Ten of these patients failed to derive any benefit from cueing, despite being able to comply with the instructions and despite the fact that the left-start condition always followed the right-start condition. Small reductions in error scores were noted for both right-start and left-start performance, but these failed to reach significance in group analyses. Only one patient benefited from cueing, detecting more targets in both hemifields. Notably, this patient was possibly the most impaired person in the group with respect to cancellation performance. While benefits from cueing have been reported for other tasks, especially line bisection and single-target detection, the benefits of cueing in more complex tasks such as cancellation may be limited to the initial instance of compliance, e.g. the first few targets detected.


    Mel Mays, John Harris

    Department of Psychology, University of Reading, Whiteknights, Reading RG6 6AL

    It has been reported that visual search for a Kanizsa square (in an array of distractors, generated from the same components, which do not create illusory surfaces) is parallel, so that search times are almost independent of the number of distractors. Although this result suggests that illusory surfaces are created pre-attentively, it is also consistent with the notion that some other property of the inducing corners (such as collinearity) might underlie the effect. We compared visual search for a Kanizsa square and for two other targets in which the Kanizsa inducers were replaced by line corners, and by line corners with short orthogonal line segments added to the outside of the lines (‘finned corners’). The distractors in each case were formed from the same components as the target, but rotated through 180 degrees. Over three experimental sessions, performance improved on all three search tasks. However, even in the third session, the slopes of the search functions were shallower for the finned corners than for the line corners, and for the line corners than for the Kanizsa corners. Mean slopes (ms/item) for target-present trials for the three target types were: Finned: 8.5; Line: 13; Kanizsa: 52. The data suggest that it is not the prior creation of illusory surfaces which leads to parallel search in these conditions, but some other feature, such as collinearity of inducers or global size differences between targets and distractors.


    Alison Lee, John Harris

    Department of Psychology, University of Reading, Whiteknights, Reading RG6 6AL

    A common symptom of Parkinson’s disease (PD) is a difficulty in continuing or initiating locomotion in confined spaces such as doorways. To test the idea that this difficulty involves a misperception of the dimensions of external space, PD sufferers and age-matched controls were presented with a schematic doorway on a large back-projection screen at two viewing distances (0.6 and 1.5 m). On each presentation, they were to imagine that they were on a narrow trolley passing through the centre of the doorway, and had to judge whether they would fit through it without rotating their shoulders, pressing one of two switches to signal their judgement to the computer generating the display. A staircase technique was used to find the aperture (doorway) width which gave 50% ‘yes’ judgements, and the ratio between this measure and the subject’s physical width at the shoulders was calculated (the A/S ratio). This ratio was significantly higher in the patients than in the controls, suggesting that the patients required the aperture width to be larger before judging that they would fit through it. The effect was larger for the shorter viewing distance, for doorways whose sides were surrounded by striped ‘wall-paper’, and for displays in which the doorway was dark on a light background. Although the general effect is consistent with normal visual perception but an abnormal body image in PD, the variations with viewing distance and with the visual characteristics of the display suggest that it may reflect an error of visual perception which leads to a compression of visually perceived extrapersonal space.