https://doi.org/10.1523/ENEURO.0245-21.2021
1 Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, Affiliated Mental Health Center (ECNU), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China and 2 Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
Abstract
Our lives revolve around sharing emotional stories (i.e., happy and sad stories) with other people. Such emotional communication enhances the similarity of story comprehension and neural responses across speaker-listener pairs. The Emotions as Social Information (EASI) model suggests that such emotional communication may influence interpersonal closeness. However, few studies have examined speaker-listener interpersonal brain synchronization (IBS) during emotional communication and whether it is associated with meaningful aspects of the speaker-listener interpersonal relationship. Here, one speaker watched emotional videos and communicated the content of the videos to 32 people as listeners (happy/sad/neutral groups). Both the speaker's and the listeners' neural activity was recorded using EEG. After listening, we assessed the interpersonal closeness between the speaker and the listeners. Compared with the sad group, the happy group showed better recall quality and higher ratings of interpersonal closeness. The happy group also showed higher IBS in the frontal cortex and left temporoparietal cortex than the sad group. The relationship between frontal IBS and interpersonal closeness was moderated by sharing happy/sad stories. Exploratory analysis using support vector regression (SVR) showed that the IBS could also predict the ratings of interpersonal closeness. These results suggest that frontal IBS could serve as an indicator of whether sharing emotional stories facilitates interpersonal closeness. These findings improve our understanding of the emotional communication between individuals that guides behavior during interpersonal interactions.
Key words: emotion; interpersonal brain synchronization; interpersonal closeness; sharing stories
Significance Statement
Despite extensive research on interpersonal communication, little is known about emotional communication (happy/sad) between speaker and listener and whether these two types of emotional communication involve differential neurocognitive mechanisms. We address these questions from a brain-to-brain perspective and show that these two types of emotional communication are associated with differential interpersonal brain synchronization (IBS), subserved in particular by the prefrontal region. Our findings shed light on the effect of sharing emotional (happy/sad) stories on interpersonal closeness and suggest that frontal IBS could serve as an indicator of whether sharing emotional (happy/sad) stories facilitates interpersonal closeness.
Received June 1, 2021; accepted September 30, 2021; First published November 8, 2021.
Author contributions: Q.Y., R.Z., N.W., and X.L. designed research; E.X., Q.Y., R.Z., and N.W. performed research; E.X. analyzed data; E.X., K.L., S.A.N., and X.L. wrote the paper.
The authors declare no competing financial interests.
sharing happy stories will increase interpersonal closeness more effectively than sharing sad ones. On the neural level, we expected that sharing happy stories would yield higher IBS than sharing sad stories in the θ band, mainly in the PFC. Finally, we hypothesized that enhanced IBS would mediate the effect of sharing emotional stories on interpersonal closeness.

Materials and Methods

Participants
A total of 32 participants (age: 21.3 ± 2.4 years, 16 females) were enrolled as listeners in the present study. Specifically, all the listeners were randomly assigned to listen to the happy stories from a competent speaker as speaker-listener dyads (15 listeners in the happy group) or the sad stories from the same competent speaker as speaker-listener dyads (17 listeners in the sad group). One competent speaker (female, 19 years of age) was initially determined in a comprehension test. During this comprehension test, an independent sample of n = 10 participants (age: 22.1 ± 2.2 years, eight females) were asked to watch the emotional videos and narrate each video. The narrations were recorded and qualitatively assessed for understanding of the stories and accuracy of the emotion in the stories by three independent raters. Suggested items to consider were (1) understanding of the stories, (2) expression of the episodes, (3) the number of scenes remembered, (4) details provided, and (5) the accuracy of emotion in the stories. A score was reported for each participant across the three raters. The brain data for the selected speaker were manually inspected for quality, and the data from the other speakers are not further analyzed here (the rating sheet is provided in Table 1). All participants provided written informed consent. The study had full ethical approval by the University Committee on Human Research Protection (UCHRP; HR 403-2019).

Table 1: Speakers' comprehension test scores
Speaker number   Happy   Neutral   Sad   Average score
101              12      13        10    11.67
102              11      21        18    16.67
103              21      18        25    21.33
104              16      16        20    17.33
105              19      10        25    18.00
106              18      17        12    15.67
107              25      23        26    24.67
108              20      16        12    16.00
109              18      19        15    17.33
110              10      19        11    13.33
The rating included an overall comprehension level out of 10, and the total score for each subject was out of 30.

Stimuli
The stimuli consisted of a total of three videos (happy video, sad video, and neutral video). The present study used three audiovisual movies, excerpts from the episodes of a happy video (Hail the Judge, ~5 min in length), a sad video (Root and Branches, ~7 min in length), and a neutral video (natural scenes, ~6 min in length). These videos were chosen to have similar levels of production quality. Further, to assess the valence and the arousal of the three videos, 10 raters (age: 20.5 ± 1.6 years, five females) were asked to identify the emotional valence of the videos (happy, neutral, or sad) and their emotional arousal on a 0-9 scale. Moreover, the 10 raters were required to rate the amount of social content and vividness on separate nine-point Likert scales (ranging from 1 to 9). The raters thus provided a comparative evaluation of the arousal, the amount of social content, and the vividness of the happy, neutral, and sad videos. Importantly, there were no significant differences in these attributes (emotional arousal, F(2,29) = 1.53, p > 0.05; amount of social content, F(2,29) = 1.32, p > 0.05; vividness, F(2,29) = 0.65, p > 0.05) between the happy, neutral, and sad videos. This provided further evidence that baseline differences between the three stimuli were minimal.
Each listener received two story stimuli, one neutral and one happy or sad. The duration of each emotional spoken recall recording was the same. The spoken recall recording of the happy story was 4 min, comprising 400 words; the spoken recall recording of the sad story was 4 min, comprising 420 words; and the spoken recall recording of the neutral story was 4 min, comprising 400 words. Audio recordings were obtained from each speaker, who watched and recounted the two videos (one neutral and one happy/sad) with EEG recording. The listener listened to the two corresponding audio recordings (Fig. 1B).

Procedures
The experimental procedures consisted of a resting-state phase and a task phase for both the speaker and the listener sessions. The speaker and the listener performed their tasks separately. In all experimental stages, the neural activity of the speaker and the listener was recorded with EEG. During the resting-state phase (60 s), participants were instructed to relax while keeping their eyes closed without falling asleep and to avoid excessive head motion. For each dyad, an initial resting-state session served as a baseline.
The task phase included two main sessions. In the first session (the speaker session), the speaker participants were first asked to watch the happy, sad, and neutral videos (Fig. 1A, Speaker Watching); second, the speakers were asked to verbally narrate the stories in the videos, and their narrations were recorded (Fig. 1A, Speaker Speaking). The speaker participants' brain activity was recorded using EEG during speaking. In the second session (the listener session), 32 listeners were invited to listen to the emotional (happy/sad) and neutral story recordings (The Listener Listening; see Fig. 1B), which came from the competent speaker chosen in the comprehension test (Fig. 1C), and to recall the corresponding recordings (The Listener Recalling; see Fig. 1B).
To control for the confounding effects of between-group differences in mood, all listeners were required to report their emotional state immediately before listening.
Figure 1. Experimental design. A, Speaker design. The speaker was invited to watch an emotional video and to share the stories in the video by narrating them. B, Listener design. The listener was asked to listen to the story of the video through the speaker's narration and then to recall the story that the speaker shared. C, The task in The Speaker Speaking and The Listener Listening: the specific procedure of sharing stories.
The happy group received happy stories from the competent speaker's recording, whereas the sad group received sad stories from the competent speaker's recording. Moreover, sharing neutral stories served as a baseline for sharing emotional stories, and it was therefore reasoned that this condition should be performed before the happy or sad condition. To determine the effect of sharing emotional stories on interpersonal closeness, the corresponding indices were assessed by self-report scales before recalling (Fig. 1C). The Inclusion of Other in the Self (IOS) scale is a single item: individuals are asked to pick the pair of circles that best describes the interpersonal relationship (Aron et al., 1992). The IOS scale has good reliability and validity for assessing interpersonal closeness (Aron et al., 1992), and several lines of research have proposed that it also has good external validity as a measure of interpersonal closeness (Simon et al., 2015; Bentley et al., 2017).

EEG data acquisition
The neural activity of each participant was simultaneously recorded with a 64-channel EEG recording system (Compumedics NeuroScan) at a sampling rate of 1000 Hz. The electrode cap was positioned following the standard 10-10 international system. Two vertical and two horizontal electrooculogram (EOG) electrodes were placed to detect eye-movement artifacts. Impedances were maintained below 10 kΩ.
Data analysis

Behavioral data analysis
The quality of communication between speaker and listener was evaluated using The Listener Recalling stage (see Fig. 1B), in which listeners were asked to recall everything they remembered from the stories they heard. Quality of recall was assessed by three raters (following the procedure in Zadbood et al., 2017). The raters first established the rating system, by which quality is principally judged on the level of detail of the scenes and the accuracy of the narration. Based on this system, they then independently rated all three stories from the listener on the same scale (from 0 to 30). The final quality score of each story was determined by averaging the three raters' scores for this story. Following the method of a similar experimental design (Takahashi et al., 2004), the behavioral index used contrasts obtained by subtracting the neutral condition from the happy and sad conditions to assess the condition-specific effect. The primary behavioral index, Δ recall quality, was computed in the following way: Δ recall quality = average recall quality in the emotional condition (happy or sad) − the corresponding neutral recall quality. That is, the score of the neutral memory served as a baseline, such that the mean score of the neutral memories was subtracted from the final scores of the emotional memories. To evaluate the difference in this behavioral index of sharing quality between the happy and sad groups, we conducted an independent-sample t test. Cronbach's αs of 0.91 for the happy video, 0.92 for the sad video, and 0.94 for the neutral video indicate high consistency between the raters.
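As a concrete illustration of this contrast, the minimal sketch below (Python with NumPy/SciPy, not the authors' MATLAB analysis scripts) computes a Δ recall quality score per listener and compares the two groups with an independent-samples t test; the arrays hold placeholder values, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder rater-averaged recall scores (0-30 scale); NOT the study data.
happy_emotional = rng.uniform(10, 30, size=15)   # 15 listeners in the happy group
happy_neutral   = rng.uniform(10, 30, size=15)
sad_emotional   = rng.uniform(10, 30, size=17)   # 17 listeners in the sad group
sad_neutral     = rng.uniform(10, 30, size=17)

# Delta recall quality: emotional recall minus the listener's own neutral recall.
d_happy = happy_emotional - happy_neutral
d_sad = sad_emotional - sad_neutral

# Independent-samples t test comparing the two groups on the difference scores.
t_stat, p_val = stats.ttest_ind(d_happy, d_sad)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```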
EEG data analysis
The EEG raw data were preprocessed and analyzed using the EEGLAB toolbox (version 14.1.0; Delorme and Makeig, 2004) and in-house scripts in MATLAB (R2014a, The MathWorks). EEG data were band-pass filtered from 1 to 45 Hz with a notch filter at 50 Hz. Data were re-referenced off-line to the average of the left and right mastoids and downsampled to 250 Hz. EEG data were then divided into consecutive epochs of 1000 ms. Eye-movement artifacts were removed with an independent component analysis (ICA) method (Makeig et al., 1996). Signals containing EEG amplitudes greater than ±75 μV were excluded.
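For readers who do not work in MATLAB, the sketch below outlines an equivalent preprocessing pipeline in MNE-Python under the parameters reported above; it is an assumption-laden illustration (the file name and mastoid channel labels are hypothetical), not the authors' EEGLAB code.

```python
import mne

# Hypothetical file name; the original recordings were NeuroScan files
# preprocessed in EEGLAB, not with this script.
raw = mne.io.read_raw_fif("dyad01_listener_raw.fif", preload=True)

raw.filter(l_freq=1.0, h_freq=45.0)      # 1-45 Hz band-pass
raw.notch_filter(freqs=50.0)             # 50 Hz line-noise notch
raw.set_eeg_reference(["M1", "M2"])      # average of left/right mastoids (assumed labels)
raw.resample(250)                        # downsample to 250 Hz

# ICA-based removal of eye-movement components (requires EOG channels).
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)
ica.exclude = eog_inds
ica.apply(raw)

# Consecutive 1000 ms epochs; drop epochs exceeding the +/-75 microvolt criterion.
epochs = mne.make_fixed_length_epochs(raw, duration=1.0, preload=True)
epochs.drop_bad(reject=dict(eeg=75e-6))
```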
EEG data were grouped into six regions for subsequent analysis: (1) frontal (F; AF4, F2, FP2, Fz, AFz, F1, FP1, AF3, F3, F5, F7, F8, F6, F4), (2) frontal-central (FC; FCz, FC1, FC3, C1, C3, C4, C2, FC4, FC2), (3) parietal (P; CP1, P5, P3, P1, Pz, P2, CP2, P4, P6), (4) left temporoparietal (left TP; FC5, FT7, C5, T7, TP7, CP5, P7), (5) right temporoparietal (right TP; T6-P8, CP6, TP8, C6, T4-T8, FT8, FC6), and (6) occipital (O; PO3, O1, POz, Oz, PO4, O2). The phase-locking value (PLV) is a valid index in EEG brain-to-brain studies (Delaherche et al., 2015; Hu et al., 2018). PLV is a practical method for the direct quantification of frequency-specific synchronization (i.e., transient phase-locking) between two neuroelectric signals and is able to examine the role of neural synchronies as a putative mechanism for long-range neural integration during cognitive tasks. Moreover, compared with the more traditional method of spectral coherence, PLV separates the phase and amplitude components and can be directly interpreted in the framework of neural integration (Lachaux et al., 1999). Thus, the data were submitted to an IBS analysis based on the PLV (Lachaux et al., 1999; Czeszumski et al., 2020). The PLV was computed for each pair (i, k) of electrodes and for each frequency band according to the following formula:

\[ \mathrm{PLV}_{i,k} = \frac{1}{N}\left|\sum_{t=1}^{N} \exp\bigl(j(\varphi_i(t) - \varphi_k(t))\bigr)\right|, \]

where N represents the number of trials, φ is the phase, |·| represents the complex modulus, and i and k index the electrodes from participants 1 and 2 in a dyad, respectively, where one participant is the speaker and the other is the listener. The PLV ranges from 0 to 1: it equals 1 if the two signals are perfectly synchronized and 0 if the two signals are unsynchronized. Phases were extracted using the Hilbert wavelet transform (Schölkopf et al., 2001), and four frequency bands, θ (4-7 Hz), α (8-12 Hz), β (13-30 Hz), and γ (31-48 Hz), were identified as typical frequency ranges in previous studies (Delaherche et al., 2015; Hu et al., 2018). The θ band was expected to show the strongest closeness-related IBS. Following the method of a similar experimental design (Takahashi et al., 2004), the neural index used contrasts obtained by subtracting the neutral condition from the happy and sad conditions to assess the condition-specific effect. Thus, the present study calculated a ΔPLV value in the θ band for each speaker-listener dyad using the equation ΔθPLV = average θ band PLV in the emotional condition (happy or sad) − the corresponding neutral average θ band PLV. We conducted independent-sample t tests (Happy vs Sad) on the IBS of speaker-listener dyads to explore the difference in IBS during sharing stories between the happy and sad groups. Differences were considered significant using an electrode-pairs-level threshold of p < 0.05 (Bonferroni corrected). All PLV analyses focused on the sharing matchup (The Speaker Speaking-The Listener Listening), which represents sharing emotional stories between speaker and listener (Ahn et al., 2018; Chen et al., 2020).
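The following NumPy/SciPy sketch implements the PLV formula above for one speaker-listener electrode pair in the θ band; the epoch arrays and sampling rate are illustrative assumptions, and this is not the authors' in-house MATLAB code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(signal, fs, low, high, order=4):
    """Band-pass filter one channel and return its instantaneous phase
    from the analytic (Hilbert) signal."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, signal)))

def plv(speaker_epochs, listener_epochs, fs=250, band=(4, 7)):
    """PLV between one speaker electrode and one listener electrode.

    Both inputs are (n_epochs, n_samples) arrays of matched 1 s epochs.
    Following the formula above, phase differences are averaged over the
    N epochs at each sample, then the sample-wise PLVs are averaged."""
    phi_s = np.apply_along_axis(band_phase, 1, speaker_epochs, fs, *band)
    phi_l = np.apply_along_axis(band_phase, 1, listener_epochs, fs, *band)
    plv_t = np.abs(np.mean(np.exp(1j * (phi_s - phi_l)), axis=0))  # over epochs
    return plv_t.mean()

# Placeholder data: 200 epochs x 250 samples per electrode; not study data.
rng = np.random.default_rng(1)
speaker = rng.standard_normal((200, 250))
listener = rng.standard_normal((200, 250))
print(f"theta-band PLV = {plv(speaker, listener):.3f}")
```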
Correlation between EEG data and behavioral data
To further explore whether the Δ value of the IBS was strongly associated with sharing emotional stories, we examined the association between the behavioral index (Δ recall quality) and the neural index (ΔPLV in The Speaker Speaking-The Listener Listening).
Moreover, we conducted a moderation regression, specifically a simple slopes analysis (Aiken and West, 1991), to explore a moderating effect of emotion on the PLV → interpersonal closeness relationship. First, we split the file by the valence of the emotion (happy/sad). Then, linear regression was used to compare whether the β coefficient was significant in the happy group and in the sad group.
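A minimal sketch of this simple slopes check is given below using statsmodels; the data frame, its column names, and the placeholder values are assumptions for illustration rather than the authors' actual analysis file.

```python
import pandas as pd
import statsmodels.formula.api as smf

def simple_slopes(df):
    """Per-group regression of interpersonal closeness on frontal theta IBS.

    `df` is assumed to have columns 'closeness', 'ibs', and 'group'
    ('happy'/'sad'); the beta for 'ibs' within each group is the simple slope."""
    out = {}
    for name, sub in df.groupby("group"):
        fit = smf.ols("closeness ~ ibs", data=sub).fit()
        out[name] = (fit.params["ibs"], fit.pvalues["ibs"])
    return out

# Placeholder data frame; not the study data.
df = pd.DataFrame({
    "closeness": [5, 6, 4, 7, 3, 5, 6, 2, 4, 3],
    "ibs":       [0.30, 0.40, 0.20, 0.50, 0.10, 0.35, 0.40, 0.15, 0.30, 0.20],
    "group":     ["happy"] * 5 + ["sad"] * 5,
})
print(simple_slopes(df))
```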
Coupling directionality
To estimate the information flow between the speaker and the listener during the Speaker Speaking-Listener Listening matchup, a Granger causality (G-causality) analysis was conducted. According to Granger theory (Granger, 1969), for two given time series X and Y, if the variance of the prediction error for the time series Y at the current time is reduced by including historical information from the time series X in the vector autoregressive model, then the changes in X can be understood to "cause" the changes in Y. The MVGC MATLAB toolbox (Barnett and Seth, 2014) was used to estimate the full multivariate conditional G-causality. The task-related data for each participant were z-scored before G-causality analysis based on the mean
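The study itself used the MVGC MATLAB toolbox; as an informal alternative, the sketch below shows a pairwise speaker → listener G-causality test with statsmodels (not the full multivariate conditional estimator). The signals and lag order are placeholder assumptions.

```python
import numpy as np
from scipy.stats import zscore
from statsmodels.tsa.stattools import grangercausalitytests

def speaker_to_listener_gc(speaker_sig, listener_sig, maxlag=5):
    """Pairwise test of whether past speaker activity improves prediction of
    the listener's activity (speaker -> listener G-causality).

    grangercausalitytests evaluates whether column 2 Granger-causes column 1,
    so the listener signal is placed first. Signals are z-scored beforehand."""
    data = np.column_stack([zscore(listener_sig), zscore(speaker_sig)])
    res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    f_stat, p_val, _, _ = res[maxlag][0]["ssr_ftest"]
    return f_stat, p_val

# Placeholder signals (not study data): the listener lags the speaker slightly.
rng = np.random.default_rng(2)
speaker = rng.standard_normal(1000)
listener = np.roll(speaker, 3) + 0.5 * rng.standard_normal(1000)
print(speaker_to_listener_gc(speaker, listener))
```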
Figure 2. Behavioral results in the happy and sad groups. A, The recall quality (Δ value) is shown for the happy (Happy−Neutral) and sad (Sad−Neutral) emotions. B, The rating of interpersonal closeness is shown for the happy (Happy−Neutral) and sad (Sad−Neutral) groups; **p < 0.01, ***p < 0.001.
Figure 3. PLV of the different groups [happy (Happy−Neutral) and sad (Sad−Neutral)] in the sharing matchup (The Speaker Speaking-The Listener Listening). A, The PLV in the θ band at the F site. B, The PLV in the θ band at the left TP site. All after FDR correction; **p < 0.01.
Fig. 3A) and the left TP site (t(30) = 3.87, p = 0.001, Cohen's d = 1.36, FDR corrected; Fig. 3B) in the θ band. No significant results were found in the other regions (see details in Table 2). Previous research has widely suggested that the frontal area is related to emotional communication and language-based interaction (Ahn et al., 2018). Moreover, the left TP is related to high-level mentalizing during communication (Samson et al., 2004). Therefore, our further analysis would focus on the PLV in the left TP.

Table 2: PLV at different regions in the θ band between the happy and sad groups (FDR corrected)
ROIs      PLV of the happy group (mean ± SD)   PLV of the sad group (mean ± SD)   t      d      df   p (corrected)
F         0.32 ± 0.10                          0.21 ± 0.09                        3.22   1.14   30   0.009
FC        0.47 ± 0.09                          0.45 ± 0.15                        0.47   0.17   30   0.960
P         0.35 ± 0.09                          0.32 ± 0.11                        0.81   0.29   30   0.854
Right TP  0.01 ± 0.05                          0.00 ± 0.09                        0.39   0.14   30   0.839
Left TP   0.39 ± 0.10                          0.27 ± 0.08                        3.87   1.36   30   0.006
O         0.02 ± 0.10                          0.01 ± 0.05                        0.94   0.11   30   0.752
F, frontal site; FC, frontal-central site; P, parietal site; right TP, right temporoparietal site; left TP, left temporoparietal site; O, occipital site.

Coupling directionality
The G-causality analysis was used to measure the directional information flow (i.e., speaker → listener). A one-sample t test showed that the G-causality of speaker → listener in The Speaker Speaking-The Listener Listening matchup was significantly different from 0 (t(31) = 13.35, p < 0.001). To sum up, the result indicated that the information could only be transmitted from the speaker to the listener and that the speaker is a significant predictor of future values of the listener.
Figure 4. Correlation between behavioral results and frontal IBS. A, Correlation between the recall quality (Δ value) and the PLV in the θ band (Δ value). The frontal PLV in the θ band positively correlated with the recall quality of dyads. B, The left TP PLV in the θ band was not significantly correlated with the recall quality of dyads. C, Correlation between interpersonal closeness and the PLV in the θ band. The frontal PLV in the θ band positively correlated with the interpersonal closeness of dyads. D, The left TP PLV in the θ band was not significantly correlated with the interpersonal closeness of dyads; *p < 0.05, ***p < 0.001.
The present study revealed that the behavioral index (the quality of recall after hearing an emotional story) was positively associated with interpersonal closeness in both the happy and sad groups. Compared with the sad group, the happy group showed better recall quality and reported higher interpersonal closeness. Moreover, higher task-related IBS was found for the happy group. Within the happy group, sharing stories moderated the IBS and thus promoted interpersonal closeness. Finally, the F site IBS can be used to predict interpersonal closeness. The aforementioned results are discussed in detail as follows.
Our results showed a positive association between sharing emotional stories and interpersonal closeness at the behavioral level. Further, in light of EASI, humans tend to synchronize with each other's behavior (McCarthy and Duck, 1976; Dubois et al., 2016) and physiological states (Konvalinka et al., 2010) during emotional expression; our results suggested that sharing happy and sad stories can facilitate interpersonal closeness. Consistent with previous studies showing that happy stories are more likely to be transmitted and received than sad stories (Piotroski et al., 2015), we also found that the happy group showed better recall quality than the sad group and reported higher interpersonal closeness. Moreover, initial behavioral studies have shown that participants prefer positive experiences on social media (Gable et al., 2004; Dorethy et al., 2014; Pounders et al., 2016), and thus sharing happy stories may project a positive image and help build interpersonal relationships with strangers (Birnbaum et al., 2020).
Examining the cognitive and neural processes involved in social interaction behaviors hinges on investigating brain-to-brain synchronization during social interaction.
Figure 5. Frontal IBS in the θ band can effectively predict interpersonal closeness. A, SVR regression between real values and predicted values. B, The r value is calculated as a metric of predicting interpersonal closeness. The significance level (threshold at p < 0.05) is calculated by comparing the r value from the correct labels (dotted line) with 10,000 randomization samples with shuffled labels (blue bars); ***p < 0.001.
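The permutation procedure described in this caption can be sketched as follows with scikit-learn's SVR (which wraps LIBSVM) and a leave-one-out scheme; the input arrays, kernel choice, and cross-validation details are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVR

def svr_prediction_test(ibs, closeness, n_perm=10_000, seed=0):
    """Leave-one-out SVR prediction of interpersonal closeness from frontal
    theta-band IBS, with a label-permutation test on the Pearson r."""
    X = np.asarray(ibs, float).reshape(-1, 1)
    y = np.asarray(closeness, float)

    pred = cross_val_predict(SVR(kernel="linear"), X, y, cv=LeaveOneOut())
    r_true = pearsonr(y, pred)[0]

    rng = np.random.default_rng(seed)
    r_null = np.empty(n_perm)
    for i in range(n_perm):
        y_perm = rng.permutation(y)
        pred_perm = cross_val_predict(SVR(kernel="linear"), X, y_perm, cv=LeaveOneOut())
        r_null[i] = pearsonr(y_perm, pred_perm)[0]

    p_val = (np.sum(r_null >= r_true) + 1) / (n_perm + 1)
    return r_true, p_val

# Placeholder values for 32 dyads (not study data); fewer permutations for speed.
rng = np.random.default_rng(3)
ibs = rng.uniform(0, 0.6, size=32)
closeness = 3 + 5 * ibs + rng.standard_normal(32)
print(svr_prediction_test(ibs, closeness, n_perm=1000))
```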
"Two-person neuroscience" in sharing stories has higher ecological validity than single-brain recording because "two-person neuroscience" is closer to real-life interactions (García and Ibáñez, 2014; Joy et al., 2018; Redcay and Schilbach, 2019). Moreover, brain-to-brain studies have been widely accepted as a way to unveil the interpersonal neural correlates of social interactions (Lu et al., 2019; Chen et al., 2020). Based on neuroimaging studies, sharing stories may be inherently reflected on the neural level (Tamir and Mitchell, 2012; Berger, 2014), and comprehension of narrations was driven by the neural similarity between the speaker and the listener (Silbert et al., 2014; Nguyen et al., 2019). A similar understanding of stories during interpersonal interaction led to enhanced IBS, which represented the higher neural similarity of speaker-listener dyads (Hasson et al., 2012; Jiang et al., 2015; Nozawa et al., 2016; Chen et al., 2020). Therefore, the present study used brain-to-brain recording to evaluate the dynamic neural interaction between the speaker and the listener, revealing a brain-to-brain interaction pattern in the process of sharing stories (the sharing matchup). Following previous emotional communication and audio brain-to-brain studies (Smirnov et al., 2019; Hou et al., 2020), the present study did not involve real-time interaction. Consistent with previous studies (Tamir et al., 2015; Kulahci and Quinn, 2019), our findings suggested that high IBS levels represented a high level of story comprehension during sharing stories, which was essential to increasing interpersonal closeness between individuals.
We found significant IBS between speaker and listener in the frontal cortex during interaction in the θ band, consistent with previous studies indicating that the θ band is associated with emotion and memory encoding (Klimesch et al., 1996; Ding, 2010; Symons et al., 2016). Previous brain-to-brain studies have found strong interpersonal neural synchronization in the frontal cortex using interactive paradigms involving verbal communication (Ahn et al., 2018; Bohan et al., 2018). Moreover, prior studies have shown that the frontal cortex critically contributes to recognizing emotions and encoding information (Abrams et al., 2011; De Borst et al., 2016). Therefore, our finding is consistent with previous findings, demonstrating that the frontal cortex was correlated with establishing a frame of emotional information.
Our results indicated that the valence of emotion played a moderating role between IBS and interpersonal closeness. A recent study has shown that the neural synchronization between the speaker and the listener was associated with emotional features of stories and that this neural synchronization created a tacit understanding between the speaker and the listener, facilitating communication and improving interpersonal relationships (Smirnov et al., 2019). It is worth noting that only the happy emotion (relative to neutral) played a moderating role in enhancing IBS. Although the theory of EASI proposes that emotional expression will increase mutual understanding between individuals, compared with positive emotional expression, the effect of negative emotional expression is subtle (Dubois et al., 2016). Individuals prefer to share positive self-related details (i.e., happy stories about the videos they watched) in the presence of strangers, and they also wish to transfer ideal images (Tamir and Mitchell, 2012; Baek et al., 2017). On the neural level, our results further evaluated the key role of different emotions (i.e., happy and sad) in the mediation effect of sharing stories on interpersonal closeness. To sum up, our results supported that sharing happy stories is more helpful in enhancing speaker-listener interaction.
Our GCA results further showed that there was a significant directionality of the enhanced IBS between the speaker and the listener, implying that the speaker was a significant predictor of future values of the listener above and beyond past values of the listener. Our findings were consistent with previous studies of unilateral communication or unilateral sharing, in which the speaker owns more information than the listener (Tworzydło, 2016). Based on the verbal cues of the speaker, the listener would frame the information, fill in the content, and adjust the content during the dynamic interactive process (Chen et al., 2017; Zadbood et al., 2017; Nguyen et al., 2019). In line with previous findings, listeners would perform as followers during sharing emotional stories, and this performance is influenced by the speaker (Stephens et al., 2010; Jiang et al., 2015; Bohan et al., 2018). Therefore, the directionality of IBS in our study highlighted the point that sharing emotional stories was dominated by the speaker.
Our results revealed the predictive effect of frontal IBS for interpersonal closeness through SVR. These findings were in line with recent studies revealing that synchronized brain activity served as a reliable neural-classification feature (Cohen et al., 2018; Hou et al., 2020; Pan et al., 2020). Moreover, a growing number of studies have used the combination of machine learning and IBS measurement in social neuroscience, so we considered more features, such as the time-frequency neural features from single-trial event-related spectral perturbation (ERSP) patterns (Makeig et al., 2004; Chung et al., 2015).
The present study had several limitations. First of all, the present study focused on specific happy and sad videos. Although the present study demonstrated that there is no difference between the videos except for their valence, so that the different effects on sharing stories are not driven by other variables related to the stories (e.g., recall duration, vividness, social context, etc.), the generalizability of the present results to more emotional videos needs to be examined in future studies. Second, spatial resolution is restricted in EEG, which is distributed on the skull and scalp (Hedrich et al., 2017), limiting measurements to specific areas during sharing emotional stories of speaker-listener dyads. Our findings indicated that frontal and temporal cortices were important in sharing emotional stories. Although the ventromedial PFC (VMPFC) and anterior cingulate cortex (ACC) play a crucial role in sharing emotional information (Killgore et al., 2013), EEG is unable to measure these two areas. Finally, the exploratory SVR predictive analysis is constrained by the relatively small sample size (although our sample size is similar to those reported in previous classification and prediction analyses based on brain-to-brain coupling data; Jiang et al., 2012; Dai et al., 2018; Pan et al., 2020).
Future replications are encouraged to consolidate the current findings by increasing both the sample size and the number of testing blocks.
In conclusion, the present study showed that sharing both happy and sad stories could increase interpersonal closeness between individuals. Moreover, findings at the neural level suggested that only sharing happy stories moderated the frontal IBS and thus promoted interpersonal closeness. These insights contribute to a deeper understanding of the neural correlates linking the sharing of different emotional stories with interpersonal closeness. Future research may explore the neural mechanism of sharing stories by using IBS as an effective neural indicator.
References
Abrams DA, Bhatara A, Ryali S, Balaban E, Levitin DJ, Menon V (2011) Decoding temporal structure in music and speech relies on shared brain resources but elicits different fine-scale spatial patterns. Cereb Cortex 21:1507–1518.
Ahn S, Cho H, Kwon M, Kim K, Kwon H, Kim BS, Chang WS, Chang JW, Jun SC (2018) Interbrain phase synchronization during turn-taking verbal interaction—a hyperscanning study using simultaneous EEG/MEG. Hum Brain Mapp 39:171–188.
Aiken LS, West SG (1991) Multiple regression: testing and interpreting interactions. Newbury Park: Sage.
Anderson DJ, Adolphs R (2014) A framework for studying emotions across species. Cell 157:187–200.
Aron A, Aron EN, Smollan D (1992) Inclusion of other in the self scale and the structure of interpersonal closeness. J Pers Soc Psychol 63:596–612.
Baek EC, Scholz C, O'Donnell MB, Falk EB (2017) The value of sharing information: a neural account of information transmission. Psychol Sci 28:851–880.
Barnett L, Seth AK (2014) The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods 223:50–68.
Baumeister RF, Bratslavsky E, Finkenauer C, Vohs KD (2001) Bad is stronger than good. Rev Gen Psychol 5:323–370.
Bentley SV, Greenaway KH, Haslam SA (2017) Cognition in context: social inclusion attenuates the psychological boundary between self and other. J Exp Soc Psychol 73:42–49.
Berger J (2014) Word of mouth and interpersonal communication: a review and directions for future research. J Consumer Psychol 24:586–607.
Birnbaum GE, Iluz M, Reis HT (2020) Making the right first impression: sexual priming encourages attitude change and self-presentation lies during encounters with potential partners. J Exp Soc Psychol 86:103904.
Bohan D, Chuansheng C, Yuhang L, Lifen Z, Hui Z, Xialu B, Wenda L, Yuxuan Z, Li L, Taomei G, Guosheng D, Chunming L (2018) Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nat Commun 9:1–12.
Chen J, Leong YC, Honey CJ, Yong CH, Norman KA, Hasson U (2017) Shared memories reveal a shared structure in neural activity across individuals. Nat Neurosci 20:115–125.
Chen M, Zhang T, Zhang R, Wang N, Yin Q, Li Y, Liu J, Liu T, Li X (2020) Neural alignment during face-to-face spontaneous deception: does gender make a difference? Hum Brain Mapp 41:4964–4981.
Chih-Chung C, Chih-Jen L (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:1–27.
Chung D, Yun K, Jeong J (2015) Decoding covert motivations of free riding and cooperation from multi-feature pattern analysis of EEG signals. Soc Cogn Affect Neurosci 10:1210–1218.
Cohen SS, Jens M, Gad T, Denise R, Stella FAL, Simon H, Parra LC (2018) Neural engagement with online educational videos predicts learning performance for individual students. Neurobiol Learn Mem 155:60–64.
Czeszumski A, Eustergerling S, Lang A, Menrath D, Gerstenberger M, Schuberth S, Schreiber F, Rendon ZZ, König P (2020) Hyperscanning: a valid method to study neural inter-brain underpinnings of social interaction. Front Hum Neurosci 14:39.
Dai B, Chen C, Long Y, Zheng L, Zhao H, Bai X, Liu W, Zhang Y, Liu L, Guo T, Ding G, Lu C (2018) Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nat Commun 9:2405.
De Borst AW, Valente G, Jääskeläinen IP, Tikka P (2016) Brain-based decoding of mentally imagined film clips and sounds reveals experience-based information patterns in film professionals. Neuroimage 129:428–438.
Delaherche E, Dumas G, Nadel J, Chetouani M (2015) Automatic measure of imitation during social interaction: a behavioral and hyperscanning-EEG benchmark. Pattern Recogn Lett 66:118–126.
Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21.
Dikker S, Silbert LJ, Hasson U, Zevin JD (2014) On the same wavelength: predictable language enhances speaker-listener brain-to-brain synchrony in posterior superior temporal gyrus. J Neurosci 34:6267–6272.
Ding M (2010) Theta oscillations mediate interaction between prefrontal cortex and medial temporal lobe in human memory. Cereb Cortex 20:1604–1612.
Dorethy MD, Fiebert MS, Warren CR (2014) Examining social networking site behaviors: photo sharing and impression management on Facebook. Int Rev Soc Sci Human 6:111–116.
Dubois D, Bonezzi A, De Angelis M (2014) Positive with strangers, negative with friends: how interpersonal closeness affects word-of-mouth valence through self-construal. ACR N Am Adv 4:41–46.
Dubois D, Bonezzi A, De Angelis M (2016) Sharing with friends versus strangers: how interpersonal closeness influences word-of-mouth valence. J Market Res 53:712–727.
Fessler DMT, Pisor AC, Navarrete CD (2015) Negatively-biased credulity and the cultural evolution of beliefs. PLoS One 9:e95167.
Fredrickson BL (2001) The role of positive emotions in positive psychology. The broaden-and-build theory of positive emotions. Am Psychol 56:218–226.
Gable SL, Reis HT, Impett EA, Asher ER (2004) What do you do when things go right? The intrapersonal and interpersonal benefits of sharing positive events. J Pers Soc Psychol 87:228–245.
García AM, Ibáñez A (2014) Two-person neuroscience and naturalistic social communication: the role of language and linguistic variables in brain-coupling research. Front Psychiatry 5:124.
Gillath O, Bunge SA, Shaver PR, Wendelken C, Mikulincer M (2005) Attachment-style differences in the ability to suppress negative thoughts: exploring the neural correlates. Neuroimage 28:835–847.
Granger CWJ (1969) Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37:424–438.
Hanley N, Boyce C, Czajkowski M, Tucker S, Noussair C, Townsend M (2017) Sad or happy? The effects of emotions on stated preferences for environmental goods. Environ Resour Econ 68:821–846.
Hari R, Henriksson L, Malinen S, Parkkonen L (2015) Centrality of social interaction in human brain function. Neuron 88:181–193.
Hasson U, Ghazanfar AA, Galantucci B, Garrod S, Keysers C (2012) Brain-to-brain coupling: a mechanism for creating and sharing a social world. Trends Cogn Sci 16:114–121.
Hedrich T, Pellegrino G, Kobayashi E, Lina JM, Grova C (2017) Comparison of the spatial resolution of source imaging techniques in high-density EEG and MEG. Neuroimage 157:531–544.
Hirst W, Echterhoff G (2012) Remembering in conversations: the social sharing and reshaping of memories. Annu Rev Psychol 63:55–79.
Hou Y, Song B, Hu Y, Pan Y, Hu Y (2020) The averaged inter-brain coherence between the audience and a violinist predicts the popularity of violin performance. Neuroimage 211:116655.
Hu Y, Pan Y, Shi X, Cai Q, Li X, Cheng X (2018) Inter-brain synchrony and cooperation context in interactive decision making. Biol Psychol 133:54–62.
Isgett SF, Fredrickson BL (2015) Broaden-and-build theory of positive emotions. In: International encyclopedia of the social and behavioral sciences, Ed 2 (Wright JD, ed), pp 864–869. Oxford: Elsevier.
Jiang J, Dai B, Peng D, Zhu C, Liu L, Lu C (2012) Neural synchronization during face-to-face communication. J Neurosci 32:16064–16069.
Jiang J, Chen C, Dai B, Shi G, Ding G, Liu L, Lu C (2015) Leader emergence through interpersonal neural synchronization. Proc Natl Acad Sci USA 112:4274–4279.
Johnson BK, Ranzini G (2018) Click here to look clever: self-presentation via selective sharing of music and film on social media. Comput Hum Behav 82:148–158.
Joy H, Adam NJ, Zhang X, Swethasri D, Yumie O (2018) A cross-brain neural mechanism for human-to-human verbal communication. Soc Cogn Affect Neurosci 13:907–920.
Killgore WDS, Schwab ZJ, Tkachenko O, Webb CA, DelDonno SR, Kipman M, Rauch SL, Weber M (2013) Emotional intelligence correlates with functional responses to dynamic changes in facial trustworthiness. Soc Neurosci 8:334–346.
Klimesch W, Doppelmayr M, Russegger H, Pachinger T (1996) Theta band power in the human scalp EEG and the encoding of new information. Neuroreport 7:1235–1240.
Konvalinka I, Vuust P, Roepstorff A, Frith CD (2010) Follow you, follow me: continuous mutual prediction and adaptation in joint tapping. Q J Exp Psychol 63:2220–2230.
Kosinski M, Stillwell D, Graepel T (2013) Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci USA 110:5802–5805.
Kulahci IG, Quinn JL (2019) Dynamic relationships between information transmission and social connections. Trends Ecol Evol 34:545–554.
Lachaux JP, Rodriguez E, Martinerie J, Varela FJ (1999) Measuring phase synchrony in brain signals. Hum Brain Mapp 8:194–208.
Lu K, Xue H, Nozawa T, Hao N (2019) Cooperation makes a group be more creative. Cereb Cortex 29:3457–3470.
Makeig S, Bell AJ, Jung TP, Sejnowski TJ (1996) Independent component analysis of electroencephalographic data. Adv Neural Inf Process Syst 145–151.
Makeig S, Debener S, Onton J, Delorme A (2004) Mining event-related brain dynamics. Trends Cogn Sci 8:204–210.
Maswood R, Rajaram S (2019) Social transmission of false memory in small groups and large networks. Top Cogn Sci 11:687–709.
McCarthy B, Duck SW (1976) Friendship duration and responses to attitudinal agreement-disagreement. Br J Soc Clin Psychol 15:377–386.
Nguyen M, Vanderwal T, Hasson U (2019) Shared understanding of narratives is correlated with shared neural responses. Neuroimage 184:161–170.
Niedenthal PM, Setterlund MB (1994) Emotion congruence in perception. Pers Soc Psychol Bull 20:401–411.
Nozawa T, Sasaki Y, Sakaki K, Yokoyama R, Kawashima R (2016) Interpersonal frontopolar neural synchronization in group communication: an exploration toward fNIRS hyperscanning of natural interactions. Neuroimage 133:484–497.
Nummenmaa L, Glerean E, Viinikainen M, Jääskeläinen IP, Hari R, Sams M (2012) Emotions promote social interaction by synchronizing brain activity across individuals. Proc Natl Acad Sci USA 109:9599–9604.
Pan Y, Dikker S, Goldstein P, Zhu Y, Yang C, Hu Y (2020) Instructor-learner brain coupling discriminates between instructional approaches and predicts learning. Neuroimage 211:116657.
Pickering MJ, Garrod S (2013) An integrated theory of language production and comprehension. Behav Brain Sci 36:329–347.
Piotroski JD, Wong TJ, Zhang T (2015) Political incentives to suppress negative information: evidence from Chinese listed firms. J Account Res 53:405–459.
Pounders K, Kowalczyk CM, Stowers K (2016) Insight into the motivation of selfie postings: impression management and self-esteem. EJM 50:1879–1892.
Ranzini G, Hoek E (2017) To you who (I think) are listening: imaginary audience and impression management on Facebook. Comput Hum Behav 75:228–235.
Redcay E, Schilbach L (2019) Using second-person neuroscience to elucidate the mechanisms of social interaction. Nat Rev Neurosci 20:495–505.
Ribeiro FS, Santos FH, Albuquerque PB, Oliveira-Silva P (2019) Emotional induction through music: measuring cardiac and electrodermal responses of emotional states and their persistence. Front Psychol 10:451.
Rozin P, Royzman EB (2001) Negativity bias, negativity dominance, and contagion. Pers Soc Psychol Rev 5:296–320.
Samson D, Apperly IA, Chiavarino C, Humphreys GW (2004) Left temporoparietal junction is necessary for representing someone else's belief. Nat Neurosci 7:499–500.
Schölkopf B, Platt JC, Shawe-Taylor J, Smola AJ, Williamson RC (2001) Estimating the support of a high-dimensional distribution. Neural Comput 13:1443–1471.
Shoham M, Moldovan S, Steinhart Y (2016) Positively useless: irrelevant negative information enhances positive impressions. J Consumer Psychol 147–159.
Silbert LJ, Honey CJ, Simony E, Poeppel D, Hasson U (2014) Coupled neural systems underlie the production and comprehension of naturalistic narrative speech. Proc Natl Acad Sci USA 111:E4687–E4696.
Simon G, Chris S, Fabio T, Mariapaz E (2015) Measuring the closeness of relationships: a comprehensive evaluation of the 'inclusion of the other in the self' scale. PLoS One 10:e0129478.
Smirnov D, Saarimäki H, Glerean E, Hari R, Sams M, Nummenmaa L (2019) Emotions amplify speaker-listener neural alignment. Hum Brain Mapp 40:4777–4788.
Stephens GJ, Silbert LJ, Hasson U (2010) Speaker–listener neural coupling underlies successful communication. Proc Natl Acad Sci USA 107:14425–14430.
Symons AE, Wael ED, Michael S, Kotz SA (2016) The functional role of neural oscillations in non-verbal emotional communication. Front Hum Neurosci 10:239–253.
Takahashi H, Koeda M, Oda K, Matsuda T, Matsushima E, Matsuura M, Asai K, Okubo Y (2004) An fMRI study of differential neural response to affective pictures in schizophrenia. Neuroimage 22:1247–1254.
Tamir DI, Mitchell JP (2012) Disclosing information about the self is intrinsically rewarding. Proc Natl Acad Sci USA 109:8038–8043.
Tamir DI, Zaki J, Mitchell JP (2015) Informing others is associated with behavioral and neural signatures of value. J Exp Psychol Gen 144:1114–1123.
Tworzydło D (2016) Public relations—the tools for unilateral communication and dialogue on the internet. Market Sci Res Organiz 20:79–90.
Vaish A, Grossmann T, Woodward A (2008) Not all emotions are created equal: the negativity bias in social-emotional development. Psychol Bull 134:383–403.
Van Kleef GA (2009) How emotions regulate social life. Curr Dir Psychol Sci 18:184–188.
Willems RM, Nastase SA, Milivojevic B (2020) Narratives for neuroscience. Trends Neurosci 43:271–273.
Yan A, Wang Z, Cai Z (2008) Prediction of human intestinal absorption by GA feature selection and support vector machine regression. Int J Mol Sci 9:1961–1976.
Zadbood A, Chen J, Leong YC, Norman KA, Hasson U (2017) How we transmit memories to other brains: constructing shared neural representations via communication. Cereb Cortex 27:4988–5000.
Zheng C, Quan M, Zhang T (2012) Decreased connectivity by alteration of neural information flow in theta oscillation in depression-model rats. J Comput Neurosci 33:547–558.