Cognitive Psychology

by Eamon Fulcher

Chapter 2: Perception


Chapter summary

Perception mostly concerns how we make sense of our visual world. Since our contact with the world is through our senses, the question that arises is whether we see 'reality' or whether what we see is guided by expectation. Studies of visual illusions make it clear that we often make mistakes when viewing our environment. This chapter focuses on these issues as well as the influence of the environment on the development of the visual system.

Visual processing

In this first section you are introduced to basic visual processes such as visual pathways from the eye to the cortex, sensory adaptation and the processing of colour.

Pathways from the eye to the cortex

•  The eye

The wavelength of visible light ranges from 380 to 760 nanometres (nm), and different wavelengths are perceived as different colours (e.g. 380 nm looks violet and 760 nm looks red). All other wavelengths are invisible to the eye, such as those corresponding to ultraviolet radiation, X-rays, gamma rays, and TV and radio waves. The lens lies behind the iris and focuses images onto the inner surface of the eye, known as the retina. The shape of the lens is altered by muscles to bring either nearby or distant objects into focus (a process known as accommodation).

•  The retina

The retina performs the sensory functions of the eye and consists of over 130 million photoreceptors (specialised neurons that convert light into neural activity). Information from the photoreceptors is transmitted along the optic nerve, which travels to the brain. The retina contains two general types of photoreceptor: 125 million rods and 6 million cones. Rods function mainly in dim light, whereas cones function mainly in bright light and respond to colour. The fovea, a small pit in the back of the retina about 1 millimetre in diameter, contains only cones and is responsible for our most detailed vision (the point at which we are looking). Farther away from the fovea, the number of cones decreases and the number of rods increases.

•  Visual pathways

The optic nerve projects to the lateral geniculate nucleus, a structure that is involved in early processing of movement, colour, fine texture and objects. Neurons then project to the primary visual cortex, or V1, which appears to carry out further processing of the motion, colour, location and orientation of objects. As for the function of the remainder of the visual cortex, the most popular theory is that different parts become specialised for different visual functions (Zeki, 1992).

Sensory adaptation

The phenomenon of sensory adaptation can be illustrated by what happens when someone turns the light out suddenly. The momentary blindness is caused by the action of photopigments, light-sensitive molecules contained in the photoreceptors of the retina. They react to light by becoming bleached, and this action stimulates the photoreceptors. Once bleached, photopigments need to be regenerated before they can respond to light again. When high levels of light strike the retina, the rate of regeneration falls behind the rate of bleaching. With only a small percentage of the photopigments available to respond to light, the rods become insensitive to light. If you enter a dark room after being in a brightly lit room, there are too few photopigments ready to be stimulated by dim light. However, after a while the regeneration of photopigments overtakes the rate of bleaching, and at this point the eye becomes adapted to the darkness.

The perception of colour

Young-Helmholtz theory

The Young-Helmholtz theory was inspired by the observation that varying the amounts of red, blue and green light can produce any colour. Young and Helmholtz speculated that the retina might contain three different types of colour-detecting cells, each sensitive to red, blue or green wavelengths of light. They further speculated that different rates of firing of these cells give rise to the perception of different colours.

There are two phenomena that this theory cannot explain easily:

•  Colour blindness

This occurs when a person is unable to distinguish between certain wavelengths of light (e.g. cannot tell shades of red from shades of green). In some cases there is no perception of colour at all. It is difficult to see how a theory based on different types of cone cells for red, blue and green could account for colour blindness.

•  Negative after-effects

Stare at a red patch for a couple of minutes or so, and then look at a white sheet of paper. An after-effect will appear in the form of a green patch of colour. After-effects take on the colour opposite to that of the stimulus viewed (the opposites being the pairs red/green, blue/yellow and light/dark). The Young-Helmholtz theory cannot explain after-effects.

Opponent-process theory

An alternative theory of colour perception was suggested by Hering (1878/1964) in an attempt to explain colour blindness and negative after-effects. It is based on the idea that three types of cells in the retina respond to pairs of opposite colours: red/green, blue/yellow and light/dark. In its anabolic phase, a cell processes one colour of the pair it is responsive to, and in its catabolic phase it processes the opposite colour. Negative after-effects can be explained by assuming that cells become fatigued by prolonged stimulation with the same colour and that they work in the opposite way as they recover. De Valois, Abramov and Jacobs (1966) provide support for the opponent-process theory.

They found 'bipolar' cells (cells that respond in opposite directions to the two colours of a pair) in the second layer of the retina and also in the thalamus. However, MacNichol (1986) found three different types of cells in the retina that respond maximally to one of three different wavelengths of light, as predicted by the Young-Helmholtz theory. It seems, then, that the processes described by both theories are evident in the visual system.
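To make the opponent-coding idea concrete, here is a minimal sketch (my own illustration, not part of either theory's original formulation) in which three cone-like inputs are recoded into red/green, blue/yellow and light/dark channels. The weights and input values are invented for illustration, not physiological measurements.

    # Illustrative opponent coding: long-, medium- and short-wavelength cone
    # responses (loosely 'red', 'green' and 'blue') are recoded into the three
    # opponent channels proposed by Hering. All numbers are invented.

    def opponent_channels(l_cone, m_cone, s_cone):
        red_green   = l_cone - m_cone                 # positive = reddish, negative = greenish
        blue_yellow = s_cone - (l_cone + m_cone) / 2  # positive = bluish,  negative = yellowish
        light_dark  = (l_cone + m_cone + s_cone) / 3  # overall brightness
        return red_green, blue_yellow, light_dark

    # A 'red' stimulus excites the long-wavelength cones most strongly:
    print(opponent_channels(l_cone=0.9, m_cone=0.2, s_cone=0.1))

On this picture, prolonged viewing of red fatigues the red/green channel; when a white sheet (roughly equal cone input) is then viewed, the recovering channel rebounds towards green, which is one way to visualise the negative after-effect.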

Pattern recognition

Although our ability to recognise objects seems a fairly effortless affair, it is a very complex process. Key research questions you will read about concern how we recognise objects that are partially hidden, how we recognise the same object at different distances and in different orientations, and how we categorise a diverse range of stimuli as the same object.

Theories of pattern recognition

Template-matching theory

Perhaps the most obvious theory of pattern or object recognition is that we have internalised ‘templates' stored in memory for every pattern or object (e.g. Ullman, 1989). Recognising an object means matching a visual stimulus with the most similar template. Through experience we may acquire a large library of such templates.

Evaluation

This theory explains very little about pattern recognition, since it suggests that all we do is match a visual stimulus with its unique representation in memory. The main problem for the theory is that recognition of even the simplest object (such as the letter A) requires us to store a template for every possible A we might come across. Given the number of different printing fonts that exist and the large differences in handwriting between people, this seems implausible. Furthermore, the theory predicts that the more templates are stored, the longer it should take to find a matching template. Yet we know that, generally, the more knowledge people have, the quicker they respond.
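As a rough illustration of the scaling problem, the toy sketch below (entirely my own construction, not from the text) recognises a letter by counting how many cells of its bitmap match each stored template. The 3x3 'letters' are invented; the point is that every new font, size or orientation would need yet another template.

    # Toy template matching: pick the stored template with the greatest
    # cell-by-cell overlap with the stimulus.
    TEMPLATES = {
        "T": ["###",
              ".#.",
              ".#."],
        "L": ["#..",
              "#..",
              "###"],
    }

    def match_score(stimulus, template):
        return sum(s == t for srow, trow in zip(stimulus, template)
                          for s, t in zip(srow, trow))

    def recognise(stimulus):
        return max(TEMPLATES, key=lambda name: match_score(stimulus, TEMPLATES[name]))

    print(recognise(["###", ".#.", ".#."]))   # exact match        -> 'T'
    print(recognise(["###", ".#.", "##."]))   # slightly distorted -> still 'T'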

Type the letter A in a word processing package and run through the different fonts available. Notice how the features of the letter can change between fonts in quite dramatic ways.

Feature detection theories

According to feature detection theory, each object has critical features that enable it to be recognised. Other, less critical features may or may not be present. For example, the letter A has two diagonal lines and a connecting cross-bar as its critical features (roughly, /-\). If we read an A in which the cross-bar protrudes beyond one of the diagonals, the protrusion would be a non-critical feature that could be ignored.

Evaluation

Evidence that supports feature theory was obtained by Neisser (1964), who compared the time taken to find a straight-lined letter (Z) among other straight-lined letters (W, V) or among letters consisting of curved lines (O, G). Performance was faster when the surrounding letters were curved, since the target then shared fewer features with them. The main problem is that the theory has to stipulate structural relations between the features of an object, otherwise recognition errors would be more commonplace than they are. For example, \-/ and /-\ contain the same features, but only the latter could be classified as an A.
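The role of structural relations can be sketched as follows (the stroke notation and the rule are my own invention for illustration; feature theory does not commit to any particular notation):

    # Toy feature check for the letter A: the critical features are two diagonal
    # strokes plus a cross-bar, and the structural relation is that the diagonals
    # must converge ('/' on the left, '\' on the right).
    def looks_like_A(strokes):
        """strokes: ordered (kind, slope) pairs, left to right,
        e.g. ('diag', +1) for '/' and ('diag', -1) for '\\'."""
        diagonals = [s for s in strokes if s[0] == "diag"]
        has_crossbar = any(s[0] == "horiz" for s in strokes)
        if len(diagonals) != 2 or not has_crossbar:
            return False
        # Structural relation: left-hand stroke rises, right-hand stroke falls.
        return diagonals[0][1] > 0 and diagonals[1][1] < 0

    print(looks_like_A([("diag", +1), ("horiz", 0), ("diag", -1)]))   # /-\  -> True
    print(looks_like_A([("diag", -1), ("horiz", 0), ("diag", +1)]))   # \-/  -> False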

 

RECORDINGS OF INDIVIDUAL NEURONS IN THE VISUAL CORTEX

One prediction of feature detection theory is that if a visual stimulus is processed according to its features then it should be possible to find individual neurons that only respond to specific features. Hubel and Wiesel (1968) identified three types of cells that appeared to be specialised for the detection of different visual stimuli:

•  Simple cells respond to a dot in one part of the visual field or to a line at one particular angle and no other. Large numbers of these cells cover all of the visual field, collecting simple information about dots and lines.

 

•  Complex cells receive information from many simple cells and combine the information about lines at particular angles in the visual field.

 

•  Hypercomplex cells receive information from complex cells and appear to respond to simple figures and shapes.

The behaviour of these cells appears to support feature detection theory (a toy illustration of an orientation-tuned 'simple cell' is sketched below). However, more recent research suggests that spatial frequency is more important than individual features in pattern recognition (Sekuler and Blake, 1985). Spatial frequency concerns the amount of light–dark contrast between the lines of a particular pattern. Letters that have many features in common (e.g. K and N) are not confused when presented very briefly, whereas letters with similar spatial frequencies tend to be confused.
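As a toy illustration of the idea that a 'simple cell' responds to a line at one particular angle and no other, the sketch below (my own construction, not a model of real neurons) treats the cell as a small oriented template and measures how strongly lines at different angles drive it. The sizes and angles are arbitrary illustrative choices.

    import numpy as np

    def oriented_bar(angle_deg, size=15):
        """Return a small image containing a bright bar at the given angle."""
        img = np.zeros((size, size))
        c = size // 2
        t = np.radians(angle_deg)
        for r in range(-c, c + 1):
            x = int(round(c + r * np.cos(t)))
            y = int(round(c + r * np.sin(t)))
            img[y, x] = 1.0
        return img

    def simple_cell_response(stimulus, preferred_angle_deg):
        """Response = overlap between the stimulus and the cell's preferred bar."""
        template = oriented_bar(preferred_angle_deg, stimulus.shape[0])
        return float((stimulus * template).sum())

    for angle in (45, 90, 0):
        resp = simple_cell_response(oriented_bar(angle), preferred_angle_deg=45)
        print(f"line at {angle:3d} degrees -> response {resp:.1f}")
    # The 45-degree line drives the 45-degree-tuned 'cell' most strongly.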

 

Prototype theories

Another idea is that objects are stored in memory in some prototypical or idealised form (for example, we may have a mental picture of what a chair looks like). Each prototype contains features that are central to the object (e.g. legs), but all features need not be present (e.g. arm rests).

A prototype represents an 'average' image of the object, abstracted from the many different forms the object can take. For example, the character A represents a prototype of the letter, since it has the essential features; each feature, however, can vary to some extent from one instance to the next. Like template theory, prototype theory is based on the notion of matching a sensory input with stored representations in memory, but prototype theory is the more parsimonious of the two. This is because (a) it appears to be a more flexible approach (since prototypes can be updated continuously with new experiences), and (b) fewer representations need to be stored.
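The contrast with template theory can be sketched as follows (a toy example of my own, not from the text): a prototype is formed by averaging the exemplars seen so far, and a new stimulus is assigned to the nearest prototype, so only one stored representation per category is needed.

    # Toy prototype formation and matching. The two-dimensional 'feature
    # vectors' (height-to-width ratio, amount of curvature) are invented.
    def average(vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    chairs = [[1.2, 0.1], [1.4, 0.2], [1.1, 0.0]]
    stools = [[0.8, 0.0], [0.7, 0.1], [0.9, 0.0]]
    prototypes = {"chair": average(chairs), "stool": average(stools)}

    new_object = [1.3, 0.1]
    best = min(prototypes, key=lambda name: distance(new_object, prototypes[name]))
    print(best)   # -> 'chair': the new exemplar is nearest the chair prototype

Re-averaging the prototype with each new exemplar is also what makes the approach flexible: the stored representation drifts with experience instead of multiplying.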

Psychologists prefer theories that are parsimonious, that is, ones that rely on only a few basic assumptions. A theory that depends upon a large number of assumptions, or one that is highly complex, is low on parsimony. All else being equal, if two theories can account for the same observation then the more parsimonious of the two is preferred.

Evaluation

Prototypes as average images of objects suffer from the problem that they discard too many vital features. By definition, an average image has little variability, yet the same kind of object (e.g. a chair) exists in many different varieties.

Face recognition

An example of a model of pattern recognition in a particular domain is Bruce and Young's (1986) model of face recognition. Imagine a face that you know well. You will have seen that face in many different ways: with different emotional expressions, at different distances, under different lighting conditions, at different angles, and even several years later when it has changed with age. This is one example of pattern invariance. Our ability to recognise the face often appears to be effortless and automatic. According to Haig (1984), this ability arises not just from the recognition of particular facial features but also from the detection of the way the facial features are combined (for example, the spaces between individual features).

Pattern invariance refers to the fact that, despite seeing a pattern in different orientations and in different lighting conditions and so on, we recognise it as the same pattern. The Bruce and Young (1986) model is based on experimental evidence as well as evidence from individuals with clinical disorders that leave them with an impaired ability to recognise faces. The model has several key assumptions:

•  Faces we know are stored as recognition units. When we see a known face, its corresponding recognition unit is activated automatically, bypassing any need to analyse its facial features. Unknown faces, however, have no corresponding recognition units, so recognising them requires their facial features to be analysed.

•  Facial recognition units are associated with information we know about the person (semantic information), but not with the person's name. In the model, names are stored separately from both recognition units and the associated semantic information.

 

Furthermore, it is claimed that there is no direct link between a name and a face in memory, and that in putting a name to a face we have to draw on semantic information. Evidence that supports the model comes from Bruce and Valentine (1985) who showed that names are not very good aids for priming the recognition of faces. People can recognise that a presented face is one that has been presented previously, but associating a name with each face neither helped nor hindered the recognition process. The disorder known as prosopagnosia provides clinical evidence for the model. In one study by Bruyer et al. (1983), an individual with prosopagnosia could learn to recognise new faces but could not recognise faces of people he knew. Another patient could identify familiar faces but had great difficulty in matching up photographs of unfamiliar ones taken at different angles or with different expressions. The evidence suggests that the difficulties experienced by prosopagnosic patients occur because of brain damage to specific face processing mechanisms rather than a general inability to make precise discriminations.

Section 3

Visual perception

You will be studying the main theories of visual perception, such as Gestalt theory, Gibson's theory of direct perception and Gregory's constructivist theory of perception. As you read about these theories, you will also learn about a number of research methodologies and findings in visual perception.

Gestalt theory

In the 1930s the Gestalt psychologists (such as Koffka, Köhler and Wertheimer) investigated how we perceive objects and visual forms. They argued that we constantly search for a 'good fit' between the visual image and stored memories of visual objects. This usually happens very quickly, since visual objects naturally form organised patterns, and this organisation is only minimally related to an individual's past experience.

The word Gestalt is German for ‘organised whole', and the theory they developed reflects a holistic approach to explaining visual perception. Several principles of perception were defined, known as the Laws of Prägnanz.

•  Law of proximity

Stimuli that are close together are seen as forming a group, even if they are not similar (e.g. HHH THT YY XCV).

•  Law of similarity

Stimuli that are similar tend to be grouped together (e.g. IIISSSIIISSSIII).

•  Law of good continuation

Stimuli that are simple are preferred to more complex ones (e.g. lines that follow a smooth course are preferred over ones that make a sharp turn).

•  Law of closure

Figures that can be closed:

[ ] [ ] [ ]

are given processing priority over figures that are unconnected:

][ ][ ][

 

Evaluation

Gestalt theory laid much of the groundwork for the study of how we detect object boundaries and how we separate visual objects from each other and from the background. However, such perception may depend upon an individual's past visual experience, and the evidence for this is reviewed later in the chapter.

Gibson's theory of direct perception

Gibson (1979, 1986) argued that perception is a bottom-up process, which means that sensory information is analysed in one direction: from simple analysis of raw sensory data to ever more complex analysis as information moves through the visual system. Gibson had attempted to train pilots in depth perception during the Second World War, and this work led him to the view that the perception of surfaces is more important than depth or space perception. Surfaces contain features sufficient to distinguish different objects from each other. In addition, perception involves identifying the function of an object: whether it can be thrown or grasped, whether it can be sat on, and so on.

Psychologists distinguish between two types of processes in pattern recognition: bottom-up processing and top-down processing.

•  Bottom-up processing is also known as data-driven processing, because perception begins with the stimulus itself. Processing is carried out in one direction from the retina to the visual cortex, with each successive stage in the visual pathway carrying out ever more complex analysis of the input.

 

•  Top-down processing refers to the use of contextual information in pattern recognition. For example, understanding difficult handwriting is easier when reading complete sentences than when reading single, isolated words, because the meanings of the surrounding words provide a context that aids understanding. There are many experimental demonstrations of top-down processing, such as Palmer (1975), who found higher recognition accuracy for cartoon facial features when they were presented together rather than in isolation. Another example is given in McClelland, Rumelhart and Hinton (1986), who point out how easy it would be to read a word even if one or two of its letters became partially obscured by an ink blob.

 

Central to Gibson's theory is the concept of affordances, which refers to what an object means to us. For Gibson, our nervous system is perfectly attuned to detecting the necessary information in the environment. For example, he understood movement and action to be an integral aspect of perception. In real environments people move their bodies and heads in order to understand their visual environment better, and movement of the perceiver or of objects helps to clarify the boundaries and textures of objects. In addition, the optic array gives important information about movement, such that we can detect whether an object is moving or whether we are moving. The perception of movement, Gibson argued, does not depend on developing a perceptual hypothesis, since there is enough information in the optic array.

Research on depth perception may provide some evidence for Gibson's theory. The distances of objects are detected in two main ways. First, monocular cues are cues that are available to each eye on its own (they work just as well with one eye as with two).

These cues are used by artists, who try to indicate distance in a painting.

•  One cue is relative size: the same object casts a smaller retinal image the further it is from the viewer.

•  Another cue is shadowing, which gives rise to an awareness that one object is in front of another object.

•  A third cue is superposition, which occurs when a close object obscures parts of a more distant object.

•  Another important cue is texture gradient, which can be observed by comparing the texture of near and distant objects: distant objects appear smoother and greyer, while near objects have clear, sharp colour and are more detailed.

•  Motion parallax is another important cue to depth and can be observed when looking out of the window of a moving vehicle: objects in the distance appear to move more slowly than near objects do.

Second, binocular cues are those that arise from the two retinal images obtained with the two eyes.

•  The difference between the two retinal images, known as binocular disparity, can give rise to the perception of distance (this difference can be experienced directly by holding a pencil at arm's length and closing one eye and then the other).

•  Another binocular cue is convergence, the inward turning of the eyes, produced by the eye muscles, as we focus on nearer objects. Such visual cues to depth could be taken as evidence of direct perception, since they do not seem to depend upon top-down processes (a worked sketch of how depth relates to binocular disparity follows this list).
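As a worked sketch of the disparity cue (my own illustration, using the standard stereo-geometry approximation rather than anything from the text), depth can in principle be recovered as eye separation times focal length divided by disparity. The numerical values below are illustrative assumptions, not measured quantities.

    # depth ~= focal_length * eye_separation / disparity (all in metres)
    def depth_from_disparity(disparity_m, eye_separation_m=0.063, focal_length_m=0.017):
        """Approximate distance of an object from the disparity between
        the two retinal images."""
        return focal_length_m * eye_separation_m / disparity_m

    for disparity_mm in (2.0, 1.0, 0.5, 0.1):
        d = depth_from_disparity(disparity_mm / 1000.0)
        print(f"retinal disparity {disparity_mm:.1f} mm -> object roughly {d:.2f} m away")

Larger disparities correspond to nearer objects, and the cue rapidly loses precision with distance, which is consistent with binocular depth information being most useful at close range.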

 

This approach is known as the ecological approach because it recognises that stimuli mean something to the perceiver. In this sense perception is said to be a direct decoding of information.

Gregory's theory of perception

Gregory argued that perception is a constructive process that relies on top-down processing. For Gregory, perception involves making inferences about what we see, trying to arrive at a best guess; prior knowledge and past experience, he argued, are crucial. When we look at something, we develop a perceptual hypothesis based on prior knowledge. The hypotheses we develop are nearly always correct, but on rare occasions they can be disconfirmed by the data we perceive. Several studies using visual illusions provide support for Gregory's theory. The Müller-Lyer illusion consists of two parallel lines of equal length, which appear to differ in length when one line has inwardly pointing fins and the other has outwardly pointing fins (Figure 2.1). Gregory (1963) argued that the illusion occurs because it draws on our visual knowledge of the world. For example, the left figure can appear to represent the inside corner of a room, and the other the outside corner. In using these depth cues an incorrect perceptual hypothesis is created: in this case there is a mismatch between past experience of depth and the raw sensory information.

 

Figure 2.1 The Müller-Lyer illusion

Object constancy and visual illusions

We can perceive an object as the same despite different viewing conditions, and this is known as object constancy. In shape constancy, the retinal image of a cup viewed from above and the retinal image of the same cup viewed from the side are quite different, yet it is perceived as the same object. Other constancies concern the object's size (whose retinal image varies with distance from the perceiver) and the object's colour (which appears to remain constant even under different lighting conditions).

According to Gregory (1963), many visual illusions occur through misapplied constancy scaling. One example is the Ponzo illusion, in which two horizontal lines appear to be of unequal length when enclosed between two converging lines (Figure 2.2). Another example is the Necker cube, a line drawing of a hollow cube that appears to change its orientation as it is viewed (Figure 2.3). Gregory argued that this object appears to flip between orientations because the brain develops two equally plausible hypotheses and is unable to decide between them. Other types are the paradoxical illusions, which consist of figures that seem plausible initially but are physically impossible. Examples are the 'impossible triangle' and the paintings of M. C. Escher, of which Waterfall is probably the best known. Gregory argued that in these illusions the brain develops more than one hypothesis, but the hypotheses contradict one another; the result is a paradox, which gives rise to the illusion.

Gibson argued strongly against the idea that perception involves top-down processing and criticised Gregory's discussion of visual illusions on the grounds that they are artificial examples and not images found in our normal visual environments. This is a crucial point, because Gregory accepted that misperceptions are the exception rather than the norm. Illusions may be interesting phenomena, but they might not be very informative about the debate.

Figure 2.2 The Ponzo illusion

Figure 2.3 The Necker cube

 

In the experiments by Tulving et al. (1964), words had to be identified as quickly and as accurately as possible. Some of the words were presented quite briefly and others were presented for longer durations.

Also, on some trials the word appeared after a semantically related sentence (a sentence related to the meaning of the word) and on other trials it followed a semantically unrelated sentence. If the word was presented very briefly, a related sentence helped participants to recognise the word accurately, demonstrating the importance of contextual, or top-down, influences in perception. However, when the word was presented for a longer duration, the sentence neither helped nor hindered recognition accuracy. This implies that when viewing conditions are clear, top-down or contextual processing is not required, and visual processing can proceed in a bottom-up manner.

There is evidence that what Gibson referred to as affordances can be influenced by top-down processes such as expectation, motivation and emotion.

•  Expectation

Bruner and Minturn (1955) presented either letters or numbers to their participants, and then showed them an ambiguous figure that was a cross between B and 13. Participants shown letters perceived the figure as a B; those shown numbers saw it as 13. In addition, when they were later asked to draw the figure, their drawings were unambiguous. Thus the perception of an ambiguous object can be influenced by what one expects or anticipates.

 

•  Motivation

The longer individuals are deprived of food, the more likely they are to perceive ambiguous pictures as food-related (Sanford, 1936). In a similar study, food deprivation was associated with rating pictures of food as visually brighter than other pictures (Gilchrist and Nesberg, 1952).

•  Emotion

The Crucial Study on Lazarus and McCleary (1951) below demonstrates that emotional associations with stimuli and events can influence our perception of them.

 

EMOTION AND PERCEPTION

In Lazarus and McCleary (1951), nonsense syllables were shown to participants and some of the syllables were paired with a small electric shock. Responses to later presentations of the nonsense syllables were monitored by recording participants' galvanic skin responses (GSR). Participants showed increases in GSR when presented with the nonsense syllables that had been paired with a shock. More significantly, when the nonsense syllables were presented subliminally (at presentation rates so brief that people report not seeing them), those previously paired with a shock still elicited a marked increase in GSR. This study demonstrates that perception and learning involve past emotional experiences, even when perception does not involve conscious awareness.

Section 4

Developmental aspects of perception

You will be studying the question of whether the ability to perceive the world is given at birth or whether it depends critically upon exposure to a visual environment. You will encounter a number of theories on this issue. The nature versus nurture issue is one of the major debates in psychology and appears elsewhere in this book (it is taken up in Chapter 6, where we consider whether language ability is innate).

Perception of patterns

Salapatek (1975) used an eye-tracking device and observed significant changes in the way infants scan patterns. For example, a one-month-old infant tends to look at the edges of a figure rather than the inside, while at two months the internal features of an object begin to be investigated. However, up to about two years, the infant still spends more time looking at the edges and contours of objects than at their internal features. This is known as the externality effect (Bushnell, 1979) and is likely to be due to the fact that the processing of contrast, which is necessary for processing internal features, has yet to develop fully in the visual system.

 

THE VISUAL CLIFF

Infants are placed on a 'visual cliff', a transparent platform laid over a checkerboard pattern that includes an apparent drop. Babies between 6 and 12 months of age were reluctant to crawl over the 'cliff' edge, even when called by their mothers (Gibson and Walk, 1960). This suggests that the infants perceived the drop and hence that depth perception is innate. However, since the infants were at least six months old, depth perception may have developed during those months.

Other evidence of innate depth perception comes from studies showing that newly born kittens, as well as young ducklings, refuse to go over the visual cliff. Later studies on infants revealed that depth perception may not be innate in humans. The heart-rate of two-month-old infants placed at the edge of the visual cliff tends to decrease (Campos et al., 1970), which implies that the infant is interested in the visual aspects of the apparatus. If the infant were afraid of the visual cliff (and hence perceived depth), it would have shown an increase in heart-rate. This finding was replicated by Schwartz et al. (1973), who further showed that increases in heart-rate in response to the visual cliff appear around the time the infant develops mobility. Before this point, then, the infant merely perceives a difference that stimulates interest, but does not perceive depth.

 

Retinal image

Bower et al. (1970) presented infants with one object just out of reach and another object at twice the distance but twice the size, so that the two different objects produced retinal images of the same size. Infants were found to be significantly more likely to reach for the nearer of the two objects, which suggests that there may be some innate aspects to depth perception. However, the method can only be used once infants are able to reach out with their arms, and hence the possibility that depth perception begins to develop around the time the infant is able to do this cannot be ruled out.
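A short worked example (my own illustration, not from the text) shows why doubling both the size and the distance of an object leaves the retinal image unchanged: the visual angle an object subtends is approximately 2 * arctan(size / (2 * distance)).

    import math

    def visual_angle_deg(object_size_m, distance_m):
        """Visual angle subtended by an object, in degrees."""
        return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

    near = visual_angle_deg(object_size_m=0.30, distance_m=1.0)  # small object, near
    far  = visual_angle_deg(object_size_m=0.60, distance_m=2.0)  # twice the size, twice the distance
    print(f"near object: {near:.2f} degrees, far object: {far:.2f} degrees")
    # Both subtend about 17 degrees, so a preference for the nearer object
    # cannot be explained by retinal image size alone.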

Defending against approaching objects

An interesting method of testing infants' depth perception is to monitor their responses to approaching objects, especially those on a collision course. Two-week-old infants were found to move their arms and head as if to defend themselves against the object, suggesting that some depth perception is innate (Bower et al., 1970). Other studies have found that infants can even discriminate between approaching objects that will hit them and approaching objects that will miss them (Ball and Tronick, 1971). However, since small head and arm movements can be interpreted either as under-developed defensive actions or as random movements, this evidence for innate depth perception is inconclusive.

Developmental theories of perception

Piaget's enrichment theory

Piaget's (1952) enrichment theory holds that perception develops as the infant interacts with the world, performing operations and noticing the results of its actions. The sophistication of these operations is said to develop over several stages.

The most critical period of perceptual development is the sensorimotor stage. In this first stage, infants under the age of two years learn to coordinate their sensory and motor skills. The infant relies on innate sensorimotor schemas, such as mouthing, grasping and touching objects. These innate schemas develop through experience, by comparing new sensory information with the existing schema. Piaget emphasised the influence of the infant's own actions on its perceptual development; however, this influence may have been overestimated, since in one study infants who had the most crawling experience showed no better depth perception than other infants (Arterberry et al., 1989). In addition, five-month-old infants, who have limited ability and experience with independent mobility, do not tend to reach for objects that are out of reach.

Shaffer's three-stage theory

Shaffer (1990) argued that there are not one but three stages of perceptual development during the first year.

•  The first stage, 0–2 months, is described as a stimulus seeking stage in which the infant develops the ability to make general visual discriminations between stimuli.

•  In the second stage, 2–6 months, which is described as a form constructing stage, infants can perceive numerous forms and shapes.

•  In the third stage, 6–12 months, which is described as a form interpretation stage, infants begin to make sense of what they perceive.

 

Findings from the visual cliff studies tend to lend support to the existence of these stages.

The nature–nurture debate

The key question is whether perception is innate or whether it is nurtured by the environment. According to the studies above, many aspects of perception appear to be innate, although the evidence can be difficult to interpret when infants are the participants. Other methods for answering the question include distortion studies, readjustment studies, deprived-environment studies and cross-cultural studies.

•  Distortion studies. In the late nineteenth century, G. M. Stratton developed a method of dramatically altering his visual world by wearing a lens over one eye that turned the world upside down (with the other eye covered). Within five days he reported that he could walk around and write comfortably. In total he wore the lens for eight days, after which, once it was removed, the world he saw was immediately recognised. This shows that the visual system is highly flexible and adaptable. Hess (1956) fitted similar prism lenses to chickens, with the result that they never completely learned to adapt, showing that the visual systems of animals may be less adaptable than that of humans.

 

•  Readjustment studies. SB gained sight at the age of 52, having been blind from birth. Within only a few days he began to make sense of his visual sensations. However, abilities such as depth perception and the understanding of visual forms were only partially acquired, his visual sensations were at times more of a hindrance than a help, and he often preferred to sit in darkness in the evenings (Gregory and Wallace, 1963). The implication is that visual abilities are either innate (and degenerate without use) or require experience to develop. Von Senden (1932) presented a summary of 66 such cases and concluded that some aspects of vision appear to be innate (identifying a figure from the background and visually tracking an object) while others are learned (depth perception and the identification of more complex visual forms).

 

•  Deprived environments. Riesen (1950) raised chimpanzees in total darkness until the age of 16 months and found that their perception of simple forms was severely impaired. Wiesel (1982) sewed one eye of a kitten shut and found that, if this was done early enough, the eye remained blind. Blakemore and Cooper (1970) found that restricting an animal's visual environment from birth made the perception of certain visual forms extremely difficult for it (see the Crucial Study on Blakemore and Cooper's restricted visual environment study below).

 

•  Cross-cultural studies. Segall et al. (1966) found that people from Zulu tribes were unable to perceive the Müller-Lyer illusion. This might imply that, because their visual environment contains few rectangles, straight lines and regular corners, they were unaffected by the top-down processing that produces the illusion (which in turn implies the importance of environmental influences in perception). Annis and Frost (1973) found that Canadian Cree Indians who lived in the countryside were very good at judging whether two lines were parallel regardless of whether the lines were presented diagonally, vertically or horizontally, yet Cree Indians who lived in the city performed poorly when the lines were presented diagonally. The explanation offered is that exposure to the vertical and horizontal lines of the city makes the perception of diagonal lines more difficult.

 

Other studies, such as Gregor and McPherson (1965), found no differences between rural- and urban-dwelling Aborigines on a number of visual illusions. One problem with cross-cultural studies is that they rely on self-report measures, and participants' verbal responses may be difficult to interpret accurately. Another is that they have been based mainly on two-dimensional visual illusions and so may tell us little about visual perception in the natural visual world.

Typical Exam Questions

1. How do we perceive colour?

2. Compare and contrast two theories of object recognition.

3. Examine the evidence for the view that the visual system receives enough information from the environment for perception to be accurate.

4. To what extent is visual perception dependent upon exposure to a visual environment?

Section 6

Further reading

You would have to search hard to find a better text on perception than:

Coren, S., Ward, L. M. and Enns, J. T. (1994) Sensation and Perception. New York: Harcourt Brace.



This book was first published in 2003 by Crucial, a division of Learning Matters Ltd [ISBN 1 903337 13 5] © 2003 Eamon Fulcher; © 2009 GEFT Consultance Services (geft.co.uk).

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission in writing from Geft Consultancy Services, who may be contacted via www.geft.co.uk.