Research Design
In the present research, three experiments were carried out.
Subjects
In total, 211 female and 86 male university students participated across the three experiments.
Materials
In Experiments 1 and 2, a set of eight seamless digital angry-happy movies, each consisting of 100 facial composites, was used. In Experiment 3, angry-sad movies were used instead of angry-happy ones. The validity of the actors’ facial expressions was pretested on an independent group of 83 participants. In a further pretest, the psychological midpoint of each movie was determined by independent groups of 23 participants for the angry-happy movies and 30 participants for the angry-sad movies.
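To make the stimulus structure concrete, a minimal Python sketch follows. It is illustrative only: the frame-to-morph mapping, the function names, and the pretest responses are assumptions for exposition, not the authors’ actual stimulus software or data.

# Illustrative sketch only: frame counts and pretest responses are
# hypothetical, not taken from Halberstadt and Niedenthal (2001).

def morph_level(frame: int, n_frames: int = 100) -> float:
    """Map a frame index (0 to n_frames - 1) to a morph proportion,
    where 0.0 is the pure angry endpoint and 1.0 the pure happy one."""
    return frame / (n_frames - 1)

def psychological_midpoint(switch_frames: list[int]) -> float:
    """Estimate a movie's psychological midpoint as the mean frame at
    which pretest participants judged the expression to change category."""
    return sum(switch_frames) / len(switch_frames)

# Hypothetical pretest responses from five participants.
print(psychological_midpoint([46, 51, 49, 53, 48]))  # 49.4
print(morph_level(49))  # about 0.495, i.e., near the physical midpoint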
Procedure
Experiment 1 was divided into two parts: the presentation phase and the recognition phase.
In the presentation phase, participants completed three angry and three happy experimental trials. Each target face appeared on the screen for 70 seconds, with the instruction “Explain why this person is feeling [angry/happy]” displayed below it. Participants were divided into two groups: verbalizers and imaginers. Verbalizers recited their explanations into a tape recorder, whereas imaginers only constructed their stories in their heads at this stage.
After a 30-minute unrelated filler task, the recognition phase began. Participants first viewed the entire movie using a sliding bar and then indicated which face was identical to one of the faces they had told a story about.
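As a concrete illustration of how recognition responses in this paradigm can be scored, the sketch below expresses memory bias as the recalled frame minus the frame actually presented. This scoring convention is an assumption for illustration, not the authors’ reported measure.

# Hypothetical scoring sketch; the authors' exact dependent measure is
# reported in the original article, not reconstructed here.

def memory_bias(recalled_frame: int, presented_frame: int) -> int:
    """Signed displacement of the recognition judgment along the movie.
    With frame 0 as the angry endpoint, negative values mean the face
    was recalled as angrier than the one actually seen."""
    return recalled_frame - presented_frame

# Hypothetical trial: the participant saw frame 50 but selected frame 43.
print(memory_bias(43, 50))  # -7, i.e., recalled as angrier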
Like Experiment 1, Experiment 2 consisted of two phases: a presentation phase and a recognition phase. However, participants in Experiment 2 were divided into three conditions: explanation, label, and control.
During the presentation phase, participants viewed the six target faces used in Experiment 1. In the explanation condition, half of the faces were paired with an angry concept and the other half with a happy concept, and participants were told to explain why the target faces expressed those emotions. In the label condition, half of the faces were labeled as angry and the other half as happy. In the control condition, no prompt was given. Participants in the label and control conditions were simply asked to view the target faces.
After a 30-minute unrelated filler task, participants completed the recognition phase, which was identical to that of Experiment 1.
The procedure of Experiment 3 was identical to that of Experiment 2, but angry-sad movies and target faces were used instead.
Results
Experiment 1
The experiment revealed a main effect of emotion category, F(1, 73) = 98.46, p < .001, such that participants recalled faces as angrier when they had explained angry faces than when they had explained happy faces. This biasing effect was significant when participants told angry stories, t(74) = 12.46, p < .001, but only marginally significant when they told happy stories, t(74) = 1.84, p = .07.
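For readers who wish to run this style of analysis, a minimal sketch using scipy.stats follows. The bias scores are invented for illustration and do not reproduce the reported statistics.

from scipy import stats

# Invented per-participant mean bias scores (recalled minus presented
# frame) for angry-story and happy-story trials; negative values mean
# recalled as angrier, following the scoring sketch above.
angry_bias = [-6.1, -4.8, -7.3, -5.5, -6.7, -5.0]
happy_bias = [0.9, -0.4, 1.2, 0.3, 0.8, 0.1]

# One-sample test of whether the angry-story bias differs from zero,
# analogous in spirit to the reported t(74) = 12.46.
t_angry, p_angry = stats.ttest_1samp(angry_bias, popmean=0.0)

# Paired comparison of the two trial types, analogous to the main effect.
t_pair, p_pair = stats.ttest_rel(angry_bias, happy_bias)

print(f"angry vs. 0: t = {t_angry:.2f}, p = {p_angry:.4f}")
print(f"angry vs. happy: t = {t_pair:.2f}, p = {p_pair:.4f}")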
Experiment 2
A main effect of emotion category emerged, F(1, 101) = 22.01, p < .001, such that faces paired with an angry concept were recalled as angrier than those paired with a happy concept. The biasing effect was significant for the explanation group, t(34) = 4.80, p < .001; marginally significant for the label group, t(36) = 1.92, p = .06; and nonsignificant for the control group, t(31) = 1.00, ns.
Moreover, the midpoints of happy-categorized faces estimated by the explanation and label groups were significantly less happy than those estimated by pretest participants, t(34) = 2.84, p < .01, and t(36) = 2.15, p < .05, respectively.
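The midpoint comparison could be run in the same spirit. The sketch below uses a Welch independent-samples test on invented midpoint settings; the authors’ exact test and data are not reconstructed here.

from scipy import stats

# Invented subjective midpoints (frame indices) for one happy-categorized
# movie: an encoding group versus the independent pretest group.
explanation_midpoints = [44, 46, 43, 45, 47, 44, 46]
pretest_midpoints = [49, 50, 48, 51, 49, 50]

# Welch's t-test (unequal variances) between the two independent groups.
t, p = stats.ttest_ind(explanation_midpoints, pretest_midpoints,
                       equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")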
Experiment 3
A main effect of emotion category emerged, F(1, 104) = 17.50, p < .001, such that faces paired with an angry concept were recalled as angrier than those paired with a sad concept. The biasing effect was significant for the explanation group, t(34) = 4.44, p < .001; marginally significant for the label group, t(35) = 1.78, p = .08; and nonsignificant for the control group, t(35) = 0.25, ns. For faces paired with a sad concept, only participants in the explanation group recalled them as sadder than they had actually appeared, t(34) = 3.50, p = .001.
Moreover, the midpoints of anger-categorized faces estimated by all three encoding groups were significantly less angry than those estimated by pretest participants, t(33) = 2.66, p < .05, t(33) = 2.86, p < .01, and t(34) = 2.04, p = .05, respectively.
Implications
The current studies showed that perceptual memory for emotional facial expressions is biased by the specific emotion concepts applied at encoding. This biasing effect occurred not only for the relatively rare and artificial angry-happy blends but also for the most frequently reported blends, angry-sad. The degree of bias depended on how the perceiver conceptualized the expression: the effect was strongest when perceivers explained the facial expressions rather than merely receiving emotion labels. Furthermore, the biasing effect fully emerged only when people’s recognition judgments were compared with their own subjective midpoints.
Critical response & logical reasoning
The current experiments have important implications for daily life. They help us understand the interaction of language, memory, and emotion processing, as well as the nature and function of emotional expressions. Most importantly, they shed light on how people process social information on the basis of their perception and memory of emotional interactions.
However, the present research has a few weaknesses.
First, the recognition and midpoint judgments in the experiments were malleable: they are easily influenced by individual differences in perceptual memory, emotional state, sex, and culture. In the present experiments, most participants were female and all studied at the same university, so the subject pool was narrow. Recruiting a more diverse sample would therefore increase the validity of the research.
Second, only a small number of expression types was tested. The present experiments examined just two blends, angry-happy and angry-sad. Testing additional blends, such as angry-disgust and happy-sad, would improve the generalizability of the research.
Third, the current experiments did not determine whether featural decomposition (describing individual facial features, such as the eyes or mouth) and configural decomposition (describing the face as an integrated whole) differ in how strongly they bias perceptual memory for emotional expressions. In Experiment 1, two-thirds of the participants explained the facial expressions through configural decomposition, while the rest explained them through featural decomposition. A more direct test is needed to establish whether the way participants explain facial expressions, and the specific content of their explanations, affects the biasing effect.
To conclude, the present research contributes much to our understanding of human social information processing. However, further research could be conducted to increase its validity and generalizability.
Reference
Halberstadt, J. B., & Niedenthal, P. M. (2001). Effects of emotion concepts on perceptual memory for emotional expressions. Journal of Personality and Social Psychology, 81(4), 581–598.