Relating Facial Trustworthiness to Antisocial Behavior in Adolescent and Adult Men
Here, we investigate how facial trustworthiness, a socially influential appearance variable, interrelates with antisocial behavior across adolescence and middle adulthood. Specifically, adolescents who look untrustworthy may be treated with suspicion, leading to antisocial behavior through expectancy effects. Alternatively, early antisocial behaviors may promote an untrustworthy appearance over time (Dorian Gray effect). We tested these expectancy and Dorian Gray effects in a longitudinal study that followed 206 at-risk boys (90% White) from ages 13-38 years. Parallel process piecewise growth models indicated that facial trustworthiness (assessed from photographs taken prospectively) declined during adolescence and then stabilized in adulthood. Consistent with expectancy effects, initial levels of facial trustworthiness were positively related to increases in antisocial behavior during adolescence and also during adulthood. Additionally, higher initial levels of antisocial behavior predicted relative decreases in facial trustworthiness across adolescence. Adolescent boys' facial appearance may therefore both encourage and reflect antisocial behavior over time.
Superior Communication of Positive Emotions Through Nonverbal Vocalisations Compared to Speech Prosody
The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining the recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and has included few categories of positive emotions. In two preregistered experiments, we compare human listeners' (total N = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (N = 880) to that from speech prosody (N = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations than from prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns in nonverbal vocalizations than in speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations than as speech prosody.
Paralinguistic Features Communicated through Voice can Affect Appraisals of Confidence and Evaluative Judgments
This article unpacks the basic mechanisms by which paralinguistic features communicated through the voice can affect evaluative judgments and persuasion. Special emphasis is placed on exploring the rapidly emerging literature on vocal features linked to appraisals of confidence (e.g., vocal pitch, intonation, speech rate, loudness), and their subsequent impact on information processing and meta-cognitive processes of attitude change. The main goal of this review is to advance understanding of the different psychological processes by which paralinguistic markers of confidence can affect attitude change, specifying the conditions under which they are more likely to operate. In sum, we highlight the importance of considering basic mechanisms of attitude change to predict when and why appraisals of paralinguistic markers of confidence can lead to more or less persuasion.
Perceived Epistemic Authority (Source Credibility) of a TV Interviewer Moderates the Media Bias Effect Caused by His Nonverbal Behavior
The Media Bias Effect (MBE) represents the biasing influence of the nonverbal behavior of a TV interviewer on viewers' impressions of the interviewee. In the MBE experiment, participants view a 4-min fabricated political interview in which they are exposed only to the nonverbal behavior of the actors. The interviewer is friendly toward the politician in one experimental condition and hostile in the other. The interviewee was a confederate filmed in the same studio, and his clips are identical in the two conditions. This experiment was used successfully in a series of studies in several countries (Babad & Peer, J Nonverbal Behav 34(1):57-78, 2010, doi:10.1007/s10919-009-0078-x) and was administered in the present research. The present investigation focused on the interviewer's source credibility and its persuasive influence. Viewers filled out questionnaires about their impressions of both the interviewer and the interviewee. A component of "interviewer's authority," accounting for substantial variance in viewers' impressions of the interviewer, was derived via PCA. In our design, we preferred the conception of Epistemic Authority (Kruglanski et al., Adv Exp Soc Psychol 37:345-392, 2005), which derives authority status from viewers' subjective perceptions, to the conventional design of source credibility studies, in which dimensions of authority are pre-determined as independent variables. The results demonstrated that viewers who perceived the interviewer as an effective leader showed a clear MBE and were susceptible to his biasing influence, whereas no bias effect was found for viewers who did not perceive the interviewer as an effective leader. Thus, Epistemic Authority (source credibility) moderated the Media Bias Effect.
Nonverbal Auditory Cues Allow Relationship Quality to be Inferred During Conversations
The claim that nonverbal cues provide more information than the linguistic content of a conversational exchange (the Mehrabian Conjecture) has been widely cited and equally widely disputed, mainly on methodological grounds. Most studies that have tested the Conjecture have used individual words or short phrases spoken by actors imitating emotions. While cue recognition is certainly important, speech evolved to manage interactions and relationships rather than simple information exchange. In a cross-cultural design, we tested participants' ability to identify the quality of the interaction (rapport) in naturalistic third-party conversations in their own and a less familiar language, using full auditory content versus audio clips whose verbal content had been digitally altered to differing extents. We found that, using nonverbal content alone, people are 75-90% as accurate as they are with full audio cues in identifying positive vs. negative relationships, and 45-53% as accurate in identifying eight different relationship types. The results broadly support Mehrabian's claim that a significant amount of information about others' social relationships is conveyed in the nonverbal component of speech.
Mining Bodily Cues to Deception
A significant body of research has investigated potential correlates of deception in bodily behavior. The vast majority of these studies consider discrete, subjectively coded bodily movements such as specific hand or head gestures, and fail to consider quantitative aspects of body movement such as precise movement direction, magnitude, and timing. In this paper, we employ an innovative data mining approach to systematically study bodily correlates of deception. We re-analyze motion capture data from a previously published deception study and experiment with different data coding options. We report how deception detection rates are affected by variables such as body part, the coding of pose and movement, the length of the observation, and the amount of measurement noise. Our results demonstrate the feasibility of a data mining approach, with detection rates above 65%, significantly outperforming human judgement (52.80%). The systematic nature of the analysis also clarifies the relative importance of the various coding factors and allows us to reconcile seemingly discrepant findings in previous research. Our approach highlights the merits of data-driven research to support the validation and development of deception theory.
Are You on My Wavelength? Interpersonal Coordination in Dyadic Conversations
Conversation between two people involves subtle nonverbal coordination in addition to speech. However, the precise parameters and timing of this coordination remain unclear, which limits our ability to theorize about the neural and cognitive mechanisms of social coordination. In particular, it is unclear whether conversation is dominated by synchronization (with no time lag), rapid and reactive mimicry (with lags under 1 s), or traditionally observed mimicry (with several seconds' lag), each of which demands a different neural mechanism. Here we describe data from high-resolution motion capture of the head movements of pairs of participants (N = 31 dyads) engaged in structured conversations. In a pre-registered analysis pathway, we calculated the wavelet coherence of head motion within dyads as a measure of their nonverbal coordination and report two novel results. First, low-frequency coherence (0.2-1.1 Hz) is consistent with traditional observations of mimicry, and modeling shows this behavior is generated by a mechanism with a constant 600 ms lag between leader and follower. This is in line with rapid reactive (rather than predictive or memory-driven) models of mimicry behavior, and could be implemented in mirror neuron systems. Second, we find an unexpected pattern of lower-than-chance coherence between participants, or hypo-coherence, at high frequencies (2.6-6.5 Hz). Exploratory analyses show that this systematic decoupling is driven by fast nodding from the listening member of the dyad, and may be a newly identified social signal. These results provide a step towards the quantification of real-world human behavior in high resolution and provide new insights into the mechanisms of social coordination.
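The coherence measure at the heart of this abstract can be illustrated in a few lines. As a simplification, the sketch below uses ordinary magnitude-squared (Welch) coherence from SciPy rather than wavelet coherence, applied to synthetic head-motion traces; the band edges (0.2-1.1 Hz, 2.6-6.5 Hz) and the 600 ms lag come from the abstract, while the sampling rate, duration, and noise levels are assumed for illustration.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 100.0                        # assumed motion-capture sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)      # one minute of synthetic head motion

# Shared slow "mimicry" component near 0.5 Hz; the follower trails the
# leader by a constant 600 ms, as in the abstract's model.
shared = np.sin(2 * np.pi * 0.5 * t)
leader = shared + 0.5 * rng.standard_normal(t.size)
follower = np.roll(shared, int(0.6 * fs)) + 0.5 * rng.standard_normal(t.size)

f, cxy = coherence(leader, follower, fs=fs, nperseg=1024)

# Mean coherence in the low (mimicry) band should exceed the high band,
# where the abstract reports hypo-coherence in real dyads.
low = cxy[(f >= 0.2) & (f <= 1.1)].mean()
high = cxy[(f >= 2.6) & (f <= 6.5)].mean()
print(low > high)
```

A wavelet-based analysis would additionally resolve how coherence varies over time, which is what the authors' pre-registered pipeline required; Welch coherence only gives the time-averaged picture.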
Efficient Collection and Representation of Preverbal Data in Typical and Atypical Development
Human preverbal development refers to the period of steadily increasing vocal capacities until the emergence of a child's first meaningful words. Over the last decades, research has focused intensively on preverbal behavior in typical development. Preverbal vocal patterns have been phonetically classified and acoustically characterized. More recently, specific preverbal phenomena have been discussed as potential early indicators of atypical development. Recent advancements in audio signal processing and machine learning have allowed for novel approaches to preverbal behavior analysis, including automatic vocalization-based differentiation of typically and atypically developing individuals. In this paper, we give a methodological overview of current strategies for collecting and acoustically representing preverbal data for intelligent audio analysis paradigms. Efficiency in the context of data collection and data representation is discussed. Following current research trends, we place a special focus on challenges that arise when dealing with preverbal data from individuals with late-detected developmental disorders, such as autism spectrum disorder or Rett syndrome.
Identifying Signatures of Perceived Interpersonal Synchrony
Interpersonal synchrony serves as a subtle yet powerful bonding mechanism in social interactions. Problematically, the term 'synchrony' has been used to label a variety of distinct aspects of interpersonal coordination, such as postural similarities or movement activity entrainment. Accordingly, different algorithms have been suggested to quantify interpersonal synchrony. Yet it remains unknown whether the different measures of synchrony represent correlated features of the same perceivable core phenomenon. The current study addresses this by comparing the suitability of a set of algorithms with respect to their association with observers' judgments of dyadic synchrony and leader-followership. One hundred fifteen observers viewed computer animations of characters portraying the movements of real dyads who performed a repetitive motor task with the instruction to move in unison. Animations were based on full-body motion capture data collected synchronously from both partners during the joint exercise. Results showed that most synchrony measures correlated significantly with (a) perceived synchrony and (b) the perceived balance of leading/following by each dyad member. Phase synchrony and Pearson correlations were associated most strongly with the observer ratings. This might be typical of intentional, structured forms of synchrony such as ritualized group activities. It remains open whether these findings also apply to spontaneous forms of synchrony, as occurring, for instance, in free-running conversations.
The Association of Embracing with Daily Mood and General Life Satisfaction: An Ecological Momentary Assessment Study
Embracing has several positive health effects, such as lowering blood pressure and decreasing infection risk. However, its association with general life satisfaction and daily mood has not been researched in detail. Here, we used a smartphone-based ecological momentary assessment (EMA) approach to monitor the daily number of embraces and daily mood in a sample of 94 adults over the course of seven days. We found that embracing frequency differed slightly over the week, with embracing occurring more frequently on weekends than on weekdays. Using multilevel modeling, we also found that higher daily embracing frequencies were associated with better daily mood. Higher average embracing frequency was associated with greater life satisfaction only among singles, whereas the life satisfaction of individuals in a relationship was unaffected by their embracing tendencies. Although our results are strictly correlational and do not indicate any direction or causality, embraces may be important for daily mood and general life satisfaction, but their efficacy seems to depend on relationship status.
Nonverbal Synchrony in Technology-Mediated Interviews: A Cross-Cultural Study
Technology-mediated communication has changed the way we interact, a trend that has become even more pronounced since the onset of the COVID-19 pandemic in March 2020. Media interviews are no exception. Yet studies of nonverbal behaviors, especially nonverbal synchrony, in such mediated settings have been scarce. To fill this research gap, this study investigated synchronized patterns between interview hosts' and guests' facial emotional displays and upper-body movement during mediated interviews recorded in Western (mainly the US, plus the UK) and Eastern (Japan) cultural contexts. The interviews were categorized as information- or entertainment-driven, depending on the social attributes of the guest. The time series of valence in facial displays and of upper-body movement were measured automatically using FaceReader and Motion Energy Analysis software, respectively, and analyzed in terms of simultaneous movements, a primary component of synchrony. As predicted, facial synchrony was more prevalent in information-driven interviews, supporting the motivational and strategic account of synchrony. In addition, female-hosted interviews showed a higher degree of synchrony, especially in information-driven interviews. Similar patterns were seen in movement synchrony, although they were not significant. This study provides the first evidence of synchrony in technology-mediated interviews in which a host and a guest appear on split screen to inform or entertain audiences. However, no cultural differences in synchrony were observed. Situational demands facing the interactants and the goal-driven nature of communication seemed to play a more prominent role in nonverbal synchrony than cultural differences.
Discriminative and Affective Processing of Touch: Associations with Severity of Skin-picking
Skin-picking is a common behavior in the general population that generally serves emotion regulation (e.g., reduction of tension). However, recent research suggests it may also be associated with changes in tactile processing sensitivity. Along these lines, the present study examined whether the severity of skin-picking (SOSP) is related to discriminative and affective touch processing. A total of 160 participants (59 males, 101 females, mean age = 31 years) completed two tactile discrimination tests (two-point discrimination, surface texture discrimination), as well as a well-validated affective touch paradigm (delivery of soft/slow touch, which is found to be generally pleasant). A hierarchical regression analysis was carried out to investigate the association between SOSP, age, sex, and indicators of tactile sensitivity. Replicating previous findings, females reported higher SOSP. While the performance in the discrimination tests did not predict SOSP, affective touch processing was associated with SOSP. Participants with high SOSP reported an urge to pick their skin after being softly touched. This seems paradoxical since previous findings have suggested skin-picking may be carried out to manage negative affective states. Our findings add to the literature describing altered sensitivity and responsivity to specific tactile stimuli in individuals with excessive skin-picking.
A Sorry Excuse for an Apology: Examining People's Mental Representations of an Apologetic Face
The goal of the current research was to gain an understanding of people's mental representations of an apologetic face. In Study 1, participants' responses were used to generate visual templates of apologetic faces through reverse correlation (Study 1a, N = 121), and a new set of participants (Study 1b, N = 37, and Study 1c, N = 153) rated that image (the group-level classification image, CI), as well as either the inverse image (the group-level anti-CI, in Study 1b) or the base face (in Study 1c), on apology-related characteristics. Results demonstrated that people have a mental representation of an apologetic face, and that sadness is an important feature of this template. To examine similarities between mental representations of apologetic and sad faces, participants in Study 2 generated visual templates of sad faces using reverse correlation (Study 2a, N = 121). New participants (Study 2b, N = 162) were then randomly assigned to rate the averaged faces, eyes, and mouths (group-level CIs), as well as the individual visual templates (individual-level CIs) generated in both studies, for either how apologetic or how sad they appeared. Visual templates of both apologetic and sad faces were seen as apologetic, providing evidence of the prominence of sadness in mental representations of apology.
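The reverse-correlation procedure behind these classification images (CIs) can be sketched with a toy simulation. Everything below is illustrative: the "faces" are random pixel vectors and the "observer" is a simulated template-matcher, not the authors' actual stimuli or participants.

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 16 * 16                        # toy "face" as a flat pixel vector
base = rng.standard_normal(npix)      # stand-in base face
template = rng.standard_normal(npix)  # observer's hidden "apologetic" template

chosen = []
for _ in range(500):                  # 500 two-alternative trials
    noise = rng.standard_normal(npix)
    # Each trial shows base+noise vs. base-noise; the simulated observer
    # picks whichever image matches their internal template better.
    if (base + noise) @ template > (base - noise) @ template:
        chosen.append(noise)
    else:
        chosen.append(-noise)

# Group-level classification image: the average of the selected noise.
# Its inverse (-ci) is the anti-CI rated in Study 1b.
ci = np.mean(chosen, axis=0)

# The CI correlates positively with the hidden template it was meant to reveal.
r = np.corrcoef(ci, template)[0, 1]
```

In the real paradigm the selected noise fields are averaged and superimposed on the base face, and independent raters then judge the resulting image, which is how the apology-relatedness of the CI was established here.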
A Quantitative Evaluation of Thin Slice Sampling for Parent-Infant Interactions
Behavioural coding is time-intensive and laborious. Thin slice sampling provides an alternative approach, aiming to alleviate the coding burden. However, little is understood about whether different behaviours coded over thin slices are comparable to those same behaviours coded over entire interactions. Our aim was to provide quantitative evidence for the value of thin slice sampling across a variety of behaviours. We used data from three populations of parent-infant interactions: mother-infant dyads from the Grown in Wales (GiW) cohort (N = 31), mother-infant dyads from the Avon Longitudinal Study of Parents and Children (ALSPAC) cohort (N = 14), and father-infant dyads from the ALSPAC cohort (N = 11). Mean infant ages were 13.8, 6.8, and 7.1 months, respectively. Interactions were coded using a comprehensive coding scheme comprising 11-14 behavioural groups, each comprising 3-13 mutually exclusive behaviours. We calculated frequencies of verbal and non-verbal behaviours, transition matrices (the probability of transitioning between behaviours, e.g., from looking at the infant to looking at a distraction), and stationary distributions (the long-term proportion of time spent within behavioural states) for 15 thin slices of the full, 5-min interactions. Measures drawn from the full sessions were compared to those from 1-, 2-, 3- and 4-min slices. We identified many instances where thin slice sampling (i.e., < 5 min) was an appropriate coding method, although we observed significant variation across different behaviours. We used this information to provide detailed guidance to researchers on how long to code for each behaviour depending on their objectives.
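The transition-matrix and stationary-distribution measures described here can be sketched as follows. The three behavioural states and the coded stream are invented for illustration, not taken from the authors' coding scheme.

```python
import numpy as np

# Toy frame-by-frame stream of one parent's coded behaviour:
# 0 = looking at infant, 1 = looking at a distraction, 2 = vocalising
states = [0, 0, 1, 0, 2, 2, 0, 1, 1, 0, 0, 2, 0, 1, 0, 0, 2, 1, 0, 0]

n = 3
counts = np.zeros((n, n))
for a, b in zip(states, states[1:]):
    counts[a, b] += 1

# Transition matrix: P[i, j] = Pr(next state = j | current state = i)
P = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution: the left eigenvector of P for eigenvalue 1,
# giving the long-run proportion of time spent in each behavioural state.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
```

Comparing thin slices to full sessions then amounts to computing P and pi from a short window of the stream and checking how closely they match the full-session versions.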
An Eye Tracking Investigation of Pain Decoding Based on Older and Younger Adults' Facial Expressions
Nonverbal pain cues, such as facial expressions, are useful in the systematic assessment of pain in people with dementia who have severe limitations in their ability to communicate. Nonetheless, the extent to which observers rely on specific pain-related facial responses (e.g., eye movements, frowning) when judging pain remains unclear. Observers viewed three types of videos of patients expressing pain (younger patients, older patients without dementia, older patients with dementia) while wearing an eye-tracking device that recorded their viewing behaviors. They provided pain ratings for each patient in the videos. Observers assigned higher pain ratings to older adults than to younger adults, and the highest pain ratings to patients with dementia. Pain ratings assigned to younger adults showed greater correspondence to objectively coded facial reactions than those assigned to older adults. The correspondence of observer ratings was not affected by the cognitive status of the target patients, as there were no differences between the ratings assigned to older adults with and without dementia. Observers' percentage of total dwell time (the amount of time an observer glances or fixates within a defined visual area of interest) across specific facial areas did not predict the correspondence of observers' pain ratings to the objective coding of facial responses. Our results demonstrate that patient characteristics such as age and cognitive status affect how observers decode pain when viewing facial expressions of pain in others.
The Role of Ethnic Prejudice in the Modulation of Cradling Lateralization
The left-cradling bias is the tendency to cradle an infant on the left side, regardless of the individual's handedness, culture, or ethnicity. Many studies have revealed associations between socio-emotional variables and the left-side bias, suggesting that this asymmetry might be considered a proxy of the emotional attunement between the cradling and the cradled individuals. In this study, we examined whether adult females with high levels of prejudice toward a specific ethnic group would show reduced left-cradling preferences when required to cradle an infant-like doll with the ethnic features of the prejudiced group. We manipulated the ethnicity of the cradled individual by asking 336 Caucasian women to cradle a White or a Black doll and then assessed their prejudice levels toward African individuals. Significant correlations emerged only in the Black doll group, indicating that the greater the prejudice toward Africans, the more the cradling-side preferences shifted toward the right. Furthermore, participants exhibiting low levels, but not those exhibiting high levels, of ethnic prejudice showed a significant left-cradling bias. These findings show that ethnic prejudice toward the specific ethnic group of the cradled individual can interfere with the left-side preference in the cradling woman. The present study corroborates our suggestion that the left-cradling bias might be considered a natural index of a positive socio-communicative relationship between the cradling and cradled individuals. Conversely, the right-cradling bias might be considered a cue to the presence of affective dysfunctions in the relationship.
Just Seconds of Laughter Reveals Relationship Status: Laughter with Friends Sounds More Authentic and Less Vulnerable than Laughter with Romantic Partners
The dual pathway model posits that spontaneous and volitional laughter are voiced using distinct production systems, and that perceivers rely upon these system-related cues to make accurate judgments about relationship status. Yet, to our knowledge, no empirical work has examined whether raters can differentiate laughter directed at friends from laughter directed at romantic partners, or the cues driving this accuracy. In Study 1, raters (N = 50) who listened to 52 segments of laughter identified the conversational partner (friend versus romantic partner) with greater than chance accuracy (M = 0.57) and rated laughs directed at friends as more pleasant-sounding than laughs directed at romantic partners. Study 2, which involved 58 raters, revealed that prototypical friendship laughter sounded more spontaneous (e.g., natural) and less "vulnerable" (e.g., submissive) than prototypical romantic laughter. Study 3 replicated the findings of the first two studies using a large cross-cultural sample (N = 252). Implications for the importance of laughter as a subtle relational signal of affiliation are discussed.
Justice and Nonverbal Communication in a Post-pandemic World: An Evidence-Based Commentary and Cautionary Statement for Lawyers and Judges
On 11 March 2020, the World Health Organization officially declared COVID-19 a pandemic. The new physical distancing rules have had many consequences, some of which are felt throughout the justice system. Courts across the world limited their operations. Nonetheless, given that justice delayed is justice denied, many jurisdictions have turned to technologies for urgent matters. This paper offers an evidence-based comment and caution for lawyers and judges who could be inclined, for concerns such as cost and time saving, to permanently step aside from in-person trials. Using nonverbal communication research, in conjunction with American and Canadian legal principles, we argue that such a decision could harm the integrity of the justice system.
Identifying Patterns of Similarities and Differences between Gesture Production and Comprehension in Autism and Typical Development
Production and comprehension of gesture emerge early and are key to subsequent language development in typical development. Compared to typically developing (TD) children, children with autism spectrum disorders (ASD) exhibit difficulties and/or differences in gesture production. However, we do not yet know whether gesture production shows patterns similar to gesture comprehension across different ages and learners, or alternatively lags behind gesture comprehension, thus mimicking a pattern akin to speech comprehension and production. In this study, we focus on the gestures produced and comprehended by a group of young TD children and children with ASD, comparable in language ability, with the goal of identifying whether gesture production and comprehension follow similar patterns between ages and between learners. We elicited production of gesture in semi-structured parent-child play and comprehension of gesture in structured experimenter-child play across two studies. We tested whether young TD children (ages 2-4) follow a similar trajectory in their production and comprehension of gesture across ages (Study 1), and if so, whether this alignment remains similar for verbal children with ASD (M = 5 years), comparable to TD children in language ability (Study 2). Our results provided evidence for similarities between gesture production and comprehension across ages and across learners, suggesting that comprehension and production of gesture form a largely integrated system of communication.
Coordinated Collaboration and Nonverbal Social Interactions: A Formal and Functional Analysis of Gaze, Gestures, and Other Body Movements in a Contemporary Dance Improvisation Performance
This study presents a microanalysis of what information performers "give" and "give off" to each other via their bodies during a contemporary dance improvisation. We compare what expert performers and non-performers (sufficiently trained to successfully perform) do with their bodies during a silent, multiparty improvisation exercise, in order to identify any differences and to provide insight into nonverbal communication in a less conventional setting. The coordinated collaboration of the participants (two groups of six) was examined in a frame-by-frame analysis focusing on all body movements, including gaze shifts as well as the formal and functional movement units produced in the head-face, upper-, and lower-body regions. The Methods section describes the annotation process and inter-rater agreement in detail. The results of this study indicate that expert performers during the improvisation are in "performance mode" and have embodied other social cognitive strategies and skills (e.g., endogenous orienting, gaze avoidance, greater motor control) that the non-performers do not have available. Expert performers avoid intentional communication, relying instead on information being communicated inferentially in order to coordinate collaboratively, with silence and stillness construed as meaningful in that social practice and context. Compared to the non-performer control group, the information that expert performers produce is quantitatively less (i.e., fewer body movements) and qualitatively more inferential than intentional, which affects the quality of the performance.
Two Means Together? Effects of Response Bias and Sensitivity on Communicative Action Detection
Numerous lines of research suggest that communicative dyadic actions elicit preferential processing and more accurate detection than similar but individual actions. However, it is unclear whether the presence of a second agent provides additional cues that allow for more accurate discrimination between communicative and individual intentions, or whether it lowers the threshold for perceiving third-party encounters as interactive. We performed a series of studies comparing the recognition of communicative actions from single-agent and dyadic displays in healthy individuals. A decreased response threshold for communicative actions was observed for dyadic vs. single-agent animations across all three studies, providing evidence for a dyadic communicative bias. Furthermore, consistent with the facilitated recognition hypothesis, a congruent response to a communicative gesture increased the ability to accurately interpret the actions. In line with dual-process theory, we propose that the two mechanisms may be complementary rather than competitive, affecting different stages of stimulus processing.