Differences in Empathy According to Nonverbal Expression Elements of Emojis: Focusing on the Humanoid Emojis of KakaoTalk

To identify the type of emoji most effective for inducing empathy, the nonverbal expression factors of emojis that generate differences in empathy were categorized as body language type (the presence of movement and contextual information), emotion type (joy and sadness), and degree of bodily expression (upper body and whole body). After dividing the data into joyful and sad emotion groups, differences in empathy according to the body language types and degree of bodily expression of emojis were examined. In the sad emotions group, empathy was higher for the movement type and the type combining movement and contextual information than for the static body language type and the contextual information type without movement. However, neither the difference in empathy according to the degree of bodily expression nor the interaction effect between body language types and degree of bodily expression was significant. In the joyful emotions group, neither the main effects nor the interaction effect was significant. These results indicate that the emoji types most effective for inducing empathy are the upper-body movement type and the upper-body type combining movement and contextual information. These types are also expected to work more effectively when applied to emotions with low mirroring and emotion recognition rates, such as sadness.

body is depicted. Among these elements, movement and contextual information can specifically affect empathy toward the sender.

Emojis and Mirroring
Previous studies have reported that text-type emoticons and graphic emojis are processed similarly to the in-person perception of facial expressions (Gantiva et al., 2019) and that the emotional assessment of emojis is an automatic process, as indexed by event-related potentials (Comesaña et al., 2013). Furthermore, O'Neil (2013) reported that participants viewing text-type emoticons exhibit facial mimicry (mirroring). This mirroring process is reflexive and involves decoding the contextual and interpersonal signals that form the basis of understanding another person's emotions (Fogassi et al., 2005). Additionally, mirroring encompasses a critical empathy mechanism whereby observing others' behaviors is associated with a motor activity that enables behavioral representation in the brain (Rymarczyk et al., 2016). In this context, Walter's (2012) work on the theory of mind and empathy shows that empathy can be developed through simulation processes based on mirroring and mentalization, whereby implicit signals and information about others' emotional expressions can activate the observer's neural network. Consequently, emoji information processing may be considered a simulation process of facial expressions and body language. Therefore, in this simulation process, movement and contextual information, which are contagious signals that contain semantic information, are key factors to consider when designing emojis (Jeon, 2020).

Movement and Contextual Information
According to previous studies, movement is a key element for reinforcing mirroring. For example, many studies have reported prompter and stronger facial muscle activation by the mirror neuron system when observing corresponding dynamic, rather than static, facial expressions (Rymarczyk et al., 2016; Sato et al., 2008; Sato & Yoshikawa, 2007; Weyers et al., 2006). These motor responses are simulations of human motor behavior that occur independently of emotional meaning (Borgomaneri et al., 2012). In particular, embodied simulation theory proposes that an observer can understand another person's emotional state by embodying their emotional expression (Gallese & Sinigaglia, 2011). In practice, the mirroring of emotional expressions has been associated with the ability to experience another's emotions (Dimberg & Thunberg, 2012) and to trigger emotional contagion, which underlies emotional empathy (Sonnby-Borgström, 2002).
Another study reported a causal relationship between facial representation and emotional expression processing (Niedenthal et al., 2001). Furthermore, since body movements are associated with emotional arousal (the degree of activation), they can facilitate emotion recognition and intensity judgments (Pollick et al., 2001; Wallbott, 1998). This could be because the physical movements caused by emotions express the characteristics of emotions through speed, openness, smoothness (Boone & Cunningham, 2001; Lee & Nam, 2007), and direction (Krumhuber et al., 2013). For this reason, it has been suggested that the body's motion signals may be sufficient to recognize basic emotions (Atkinson et al., 2004; Melzer et al., 2019). Therefore, the movement of emojis could positively affect empathizing with another person's emotions by activating the mirror neuron system, thus allowing the observer to experience emotions similar to those of the sender and increasing emotion recognition.
Conversely, since the contextual information of emojis corresponds to knowledge and information, it is an element that must be interpreted by accessing long-term memory. This means contextual information can interfere with the unconscious and automatic mirroring response. Specifically, emoji attributes such as faces are automatically processed; indeed, previous studies have shown that when facial expressions are iconized (diagrammed), the differential perception of emotional expressions increases (Kendall et al., 2016). As such, since facial information processing is affected by the availability of cognitive resources (MacNamara et al., 2011), too much information regarding behavioral patterns can overload cognitive processing (Millar & Millar, 1995). Therefore, the contextual information of emojis could be expected to negatively affect mirroring and automatic emoji processing.
However, empathy is not a simple somatic response but an emotional response that relies on cognition. Goldman (2006, 2008) explained that the low-level mirroring automatically generated by facial expressions can be expanded into high-level simulation by knowledge and information. During this high-level simulation, the contextualization of emotion mirroring can lead to richer emotional knowledge (De Vignemont, 2009). Therefore, the contextual information of emojis is highly likely to promote empathy by expanding low-level mirroring into high-level simulation. Specifically, contextual information that provides a rich story through symbolic graphics could positively affect empathy by inducing assumptions about the perspective of others and promoting personification to simulate other people's emotions (Fini et al., 2015).
In addition, contextual information can help increase the rate of emotion recognition (Theurel et al., 2016) because it helps to clarify signals (Aviezer et al., 2008) by reducing confusion between emotion types that have a strong physical similarity. Therefore, when this kind of contextual information is combined with the movements that underlie emotion recognition, there may be a more positive effect on empathy as emotion recognition increases and a richer narrative is generated.

Emotion Type and the Degree of Bodily Expression
Previous studies have shown that mirroring is influenced by emotion types that are characterized by emotional arousal and valence. In this regard, there have been reports that the level of arousal, especially for pleasant emotions (happiness, pleasure), affects facial imitation (Fujimura et al., 2010; Greenwald et al., 1989; Witvliet & Vrana, 1995). In other words, as the intensity (arousal) of an emotion with a pleasant valence increased, activity in the zygomatic facial muscles (which is typical in happy facial expressions) increased. Thus, mirroring is thought to be sensitive to emotional arousal as well as movement, since these are governed by somatic mirror neurons. However, these studies found no correlation between corrugator facial muscle activity (typical in angry facial expressions) and arousal level for unpleasant expressions. This shows that mirroring may be affected by valence as well as emotional arousal.
Indeed, prior studies have reported that despite the automaticity of emotional imitation, it also reflects the valence of the emotion (Eisenbarth et al., 2011; Neumann et al., 2005) and can be controlled by its valence (Seibt et al., 2015). For example, Leighton et al. (2010) found that the automatic mimicking effect occurs more in response to pro-social words than anti-social words. Additionally, in everyday life, people imitate smiles more than they do frowns (Hinsz & Tomhave, 1991). This is probably not because a smile confers higher arousal than a frown but because it has a pleasant valence and is associated with pro-social emotions. Therefore, considering these previous studies, we can propose that emotions such as joy, with high arousal and a pleasant valence, can reinforce empathy by generating a greater degree of mirroring than emotions such as sadness, with low arousal and an unpleasant valence.
In addition, emotion recognition can be influenced by emotion types, especially emotional arousal (the activation dimension). For example, joy, which confers higher emotional arousal than sadness, is also easier to recognize as a result of its greater openness and intensity of physical movements (Lee & Nam, 2007). Indeed, it has been shown that joyful emotions result in a higher rate of emotion recognition than sad emotions (Alves et al., 2009; Garcia & Tully, 2020).
Furthermore, the degree of bodily expression, which determines the range of expression of emotional information, can influence the mirroring and recognition of emotions. Specifically, since bodily expression can embody emotions effectively (Flack et al., 1999) due to physical arousal, mirroring may be stronger as the degree of bodily expression associated with facial expressions increases. In addition, while we primarily recognize emotions through facial expressions, physical postures and gestures, in which emotions are also expressed, act as additional cues for recognizing emotions. Thus, bodily expression can play a positive role in emotion recognition (Aviezer et al., 2012; Coulson, 2004; Ekman & Friesen, 1967, 1969); in particular, body expressions can provide more information than faces when distinguishing between fear and anger, or between fear and happiness (Meeren et al., 2005; Van den Stock et al., 2007). For this reason, the combination of facial expressions, gestures, and postures makes it easier and faster to recognize emotions than relying on only one body expression (Flack et al., 1999; Gunes & Piccardi, 2007). Moreover, bodily expression is positively associated with the richness of emotional knowledge about the sender and, therefore, enhances the sense of reality and narrative. Thus, the whole body, including the face, may have a more positive effect on empathy than the upper body plus the face, or the face alone.
As we have seen so far, empathy can be affected not only by the presence of movement and contextual information in emojis but also by emotion type and the degree of bodily expression. However, to design emojis that increase empathy, we need to specifically identify the types of emojis that effectively induce empathy. To accomplish this, we determined that it was necessary to divide the emojis into positive (joy) and negative (sadness) emotion groups and evaluate differences in empathy according to the presence of movement and contextual information and the degree of bodily expression in each group.
In other words, since emojis provide visual information representing the sender's emotions, the study must be designed to evaluate each emotion type. Additionally, when the sender applies emojis, the first selection criterion is the emotion type that represents their emotions. Therefore, it is necessary to identify the emoji types that effectively induce empathy by dividing them according to emotion type. When the sender selects an emoji of the same emotion type as their own emotion, the selected emoji also varies in the presence of movement and contextual information and in the degree of bodily expression. However, considering that the degree of bodily expression determines the range over which movement and contextual information are expressed, and that the emotional processing of emojis is automatic (Comesaña et al., 2013), the complexity of information resulting from combining movement and contextual information at a given degree of bodily expression can affect the information processing of emojis. Thus, emojis depicting only the upper body may be more effective in inducing empathy than those depicting the whole body, even though whole-body information more effectively induces empathy in real-world situations. Previous reports have found that emotionally relevant information can be detected from the movement of the upper body (dynamic qualities of the gesture; Glowinski et al., 2011) and that the information contained in upper-body motion in natural scenarios is enough for people to recognize emotion (Volkova et al., 2014). Therefore, to determine the type of emoji that is most effective for inducing empathy, this study proposed the following research questions:

Research Question 1 (RQ1):
Will empathy differences occur according to body language types (the presence of movement and contextual information) and degree of bodily expression (upper body and whole body) of emojis in both the joyful and sad emotions groups?

Research Question 2 (RQ2):
Will there be an interaction effect between body language types (the presence of movement and contextual information) and degree of bodily expression (upper body and whole body) of emojis in the joyful and sad emotions groups?

Instruments
In this study, the expression elements of emojis chosen as generating empathy differences in mobile messenger services were movement and contextual information, emotion type, and the degree of bodily expression; these are defined as "nonverbal expression elements of emojis." This operational definition was based on the fact that emojis, which act as a proxy for nonverbal expressions (Walther, 2006) in mobile messages, are categorized by the presence of movement and contextual information (Yang et al., 2017), emotion type (Chang, 2015), and the degree of bodily expression (Jeon, 2019). Among these, the key expressive elements of emojis for generating empathy were set as movement and contextual information in the body language of emojis. To examine the differences in empathy based on the presence of movement and contextual information, we set four "body language types" [movement (absent) + contextual information (absent) = static body language type (Type A); movement (absent) + contextual information (present) = contextual information type (Type B); movement (present) + contextual information (absent) = movement type (Type C); movement (present) + contextual information (present) = movement + contextual information type (Type D)] based on static body language (Table 1). This operational definition was based on the description of body language as nonverbal messages conveyed by physical movements (Knapp, 1978) and on findings that emotional recognition based on body language is influenced by contextual variables (Wieser & Brosch, 2012).
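The four body language types amount to the 2 × 2 combination of two binary attributes. As a minimal illustration (the function and labels below are our own encoding of the typology in Table 1, not part of the study materials), the scheme can be expressed as a lookup:

```python
# Hypothetical encoding of the four body language types (Table 1):
# each type is one cell of the 2 x 2 combination of movement
# (absent/present) and contextual information (absent/present).
def body_language_type(movement: bool, context: bool) -> str:
    """Return the body language type label for a given pair of emoji attributes."""
    types = {
        (False, False): "Type A: static body language",
        (False, True):  "Type B: contextual information",
        (True,  False): "Type C: movement",
        (True,  True):  "Type D: movement + contextual information",
    }
    return types[(movement, context)]
```

For example, `body_language_type(True, False)` yields the movement type (Type C), and `body_language_type(True, True)` yields the combined movement + contextual information type (Type D).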
In addition, we compared emotion types (joyful and sad emotions) and degrees of bodily expression (upper body and whole body). Joy and sadness are among the most distinct basic human emotions (Russell, 1983). Furthermore, according to the circumplex model of Russell (1980) and the emotion category system of Wallbott (1998), joy is defined as an emotion with a pleasant valence and high arousal, whereas sadness has an unpleasant valence and low arousal. The categorization of the degree of bodily expression of emojis as the upper body or whole body, including facial expressions, was based on the classification of body language into physical units (gaze/stare, facial expression, gesture, and posture) by Birdwhistell (1970) and the classification of body language by symbolic meaning (head, torso, and legs) by Delsarte (Shawn, 1963). Also, since mobile messages are displayed in a small chat window, we considered the possibility that movement and contextual information of emojis are expressed more strongly when facial and bodily expressions are combined than when facial expressions alone are presented.

Measurement and Reliability of Variables
In this study, the term "empathy" refers to the ability to understand another person's emotional state or experience, as well as the ability to share and respond to emotional expression (Blair, 2005). Therefore, we measured empathy using "emotional empathy" items taken from the empathy scales of Escalas and Stern (2003), Davis (1980), and Mehrabian and Epstein (1972). The scale used in the current study was built by extracting the two most suitable items from the three aforementioned scales: "When I look at this emoji, I experience the same emotion that the emoji expresses" and "When I look at this emoji, the emotion of the emoji (sender) feels the same as my own emotion." Items were scored on a 5-point Likert scale (1 = not at all; 5 = very much). Cronbach's α was .855, indicating that the reliability of the scale was high.
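As background on the reliability statistic reported above, Cronbach's α for a k-item scale is k/(k−1) · (1 − Σ item variances / variance of item totals). The sketch below computes it for a two-item scale; the responses are invented for illustration and are not the study data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns
    (each column holds one item's scores across all respondents)."""
    k = len(items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-respondent total score across items
    totals = [sum(vals) for vals in zip(*items)]
    item_var_sum = sum(var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Invented 5-point Likert responses to two emotional-empathy items
item1 = [4, 5, 3, 4, 2, 5, 4, 3]
item2 = [4, 4, 3, 5, 2, 5, 4, 2]
alpha = cronbach_alpha([item1, item2])  # roughly 0.91 for these values
```

With two strongly correlated items, α comes out high (about .91 for these invented responses), which is the same pattern of high internal consistency the study reports (α = .855).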

Study Participants and the Survey Method
It has been reported that 99.4% of South Koreans use KakaoTalk and that individuals in their twenties have the highest usage of mobile messenger services (Ministry of Science and ICT & Korea Internet and Security Agency, 2017). Therefore, we selected study participants from among college students across the country who used emojis on KakaoTalk. Stimuli were limited to "MUZI & CON" emojis (emojis with a human face and body shape, not text-type emoticons), which are distributed for free, express both joyful and sad emotions, and belong to "Kakao Friends," the most popular set of KakaoTalk emojis (Kim, 2015). Stimuli selected for this study were presented in the KakaoTalk chat window to allow participants to imagine that the emojis were being received from their friends on the messenger app.
In addition, although emojis are sometimes used in place of text messages, they are usually used with sentences to express or supplement the sender's emotional state and situation (Ai et al., 2017; Donato & Paggio, 2017). Therefore, in this study, each emoji was paired with the sentence "I got a confession from him/her today" in the joyful emotions group and with the sentence "I have broken up with him/her today" in the sad emotions group. However, the emojis were arranged so that the recipient's response to the emoji stimuli could be sufficiently reflected and measured, as follows. First, rather than inserting the emojis into the sentences, they were separated from the sentences and arranged in parallel in an emphasized (enlarged) form. Second, even in this parallel arrangement, the emojis were placed in front of the sentences to induce recipients to recognize the emojis before the sentences (Figure 1).
The survey was conducted on personal computers and mobile devices. All participants provided written consent for the use of their data obtained from the survey for this study.
Participants were asked to complete the questionnaire regarding their feelings and thoughts on the emojis presented in the KakaoTalk chat window. All stimuli and questions used in the survey were randomized to avoid a learning effect. Furthermore, we established a system to prevent unreliable answers. First, we added four independent screening questions that instructed participants to enter a specific number, to identify respondents who selected answers at random. Second, to encourage thorough reading and answering of each question, the questions were presented sequentially after each stimulus had been displayed, with the next question shown only after the previous one had been answered. Third, the next stimulus was presented after a 30-second interval once the previous question had been answered.

Data Collection and Analysis Methods
Data were collected between February 28th and March 5th, 2018. Participants were 615 university students living in Korea who had used KakaoTalk. Participants were selected after completing a screening questionnaire (which collected data on sex, age, school year, area of residence, whether they had used KakaoTalk, whether they had used emojis, and daily frequency of emoji usage). Of the 520 participants who successfully completed the four screening questions added to prevent unreliable responses, six individuals who had not used emojis were excluded, leaving data from a final total of 514 participants for analysis.
The data were divided into joyful and sad emotion groups, and differences in empathy according to body language types (the presence of movement and contextual information) and degree of bodily expression (upper body and whole body) were analyzed using a two-way ANOVA. In addition, given the non-normal distribution of the data, the analysis was also performed on the natural logarithm (LN) of empathy, the main variable. Data were analyzed using SPSS version 25 (IBM Corp., Armonk, NY, USA).
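As a rough sketch of this analysis pipeline (a hand-rolled balanced two-way ANOVA on invented, log-transformed scores, not the study data or the SPSS procedure), the F ratios for the two main effects and their interaction can be formed as follows:

```python
import math

# Invented empathy scores (5-point Likert) for a balanced 2 x 2 design:
# two illustrative body language types x two degrees of bodily expression,
# three respondents per cell. These values are NOT the study data.
data = {
    ("movement", "upper"): [4.0, 4.5, 4.2],
    ("movement", "whole"): [4.1, 4.4, 4.3],
    ("static",   "upper"): [3.0, 3.2, 2.9],
    ("static",   "whole"): [3.1, 3.3, 3.0],
}

def two_way_anova(cells):
    """Balanced two-way ANOVA by hand: F ratios for factor A (body language
    type), factor B (degree of bodily expression), and the A x B interaction."""
    def mean(xs):
        return sum(xs) / len(xs)

    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))              # per-cell sample size
    all_scores = [x for v in cells.values() for x in v]
    grand = mean(all_scores)

    # Marginal and cell means
    a_mean = {a: mean([x for (ai, _), v in cells.items() if ai == a for x in v])
              for a in a_levels}
    b_mean = {b: mean([x for (_, bi), v in cells.items() if bi == b for x in v])
              for b in b_levels}
    cell_mean = {k: mean(v) for k, v in cells.items()}

    # Sums of squares for main effects, interaction, and error
    ss_a = n * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_err = sum((x - cell_mean[k]) ** 2 for k, v in cells.items() for x in v)

    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    df_ab = df_a * df_b
    df_err = len(all_scores) - len(cells)
    ms_err = ss_err / df_err
    return {"F_A": (ss_a / df_a) / ms_err,
            "F_B": (ss_b / df_b) / ms_err,
            "F_AxB": (ss_ab / df_ab) / ms_err}

# Natural logarithm (LN) transform of the dependent variable, as in the paper
logged = {k: [math.log(x) for x in v] for k, v in data.items()}
result = two_way_anova(logged)
```

In SPSS this corresponds to a univariate GLM with the two fixed factors and their interaction; the sketch only illustrates how the F ratios are constructed from the sums of squares.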

Demographic Characteristics
There was a higher proportion of female than male participants in this study. Moreover, most participants were seniors, followed by sophomores, juniors, and freshmen. Regarding the daily frequency of emoji use, the most common frequency was 1 to 4 times, followed by >15, 5 to 9, 10 to 14, and <1 (Table 2).

Table 3 shows the results of the two-way ANOVA conducted to confirm the difference in empathy according to body language types (the presence of movement and contextual information) and degree of bodily expression (upper body and whole body) of emojis in the joyful and sad emotions groups. Neither the main effects nor the interaction effect was significant in the joyful emotions group. In contrast, in the sad emotions group, there was a significant main effect of body language type on empathy (F = 7.356, p < .001). However, neither the difference in empathy according to the degree of bodily expression nor the interaction effect between body language types and degree of bodily expression was significant in the sad emotions group.

Table 4 shows the descriptive statistics and post hoc test results for the main effects in the joyful and sad emotions groups. In the descriptive statistics, empathy was ordered A, B < C < D in the joyful emotions group and B < A < C < D in the sad emotions group (see Figure 2). In addition, empathy was the same for the upper body and whole body in the joyful emotions group and higher for the whole body than the upper body in the sad emotions group. However, the post hoc test indicated that empathy was higher in Types C and D than in Types A and B only in the sad emotions group (A, B < C, D), and there was no empathy difference between the upper body and the whole body in either emotion group.

RQ1 and RQ2
Tables 5 and 6 provide the descriptive statistics for the differences in empathy according to body language types (the presence of movement and contextual information) by degree of bodily expression (upper body and whole body) in the joyful and sad emotions groups. In the joyful emotions group, empathy was ordered A < C < B < D for the upper body and B < C < A < D for the whole body. In contrast, in the sad emotions group, empathy was ordered B < A < C < D for both the upper body and the whole body. Because the interaction effect was not significant, these detailed differences should be interpreted with caution; nevertheless, when comparing mean values, empathy tended to be highest for Type D, which combines movement and contextual information, regardless of emotion type and degree of bodily expression. In addition, empathy tended to be lowest for the contextual information type without movement, except for the upper body in the joyful emotions group (Figures 3 and 4).

Discussion and Conclusion
In this study, movement and contextual information were set as key expression factors for the generation of empathy, based on previous neuroscience research, in order to identify the type of emoji most effective for inducing empathy. The emojis were grouped into joyful and sad emotions, and differences in empathy according to body language types (the presence of movement and contextual information) and degree of bodily expression (upper body and whole body) were examined. The results demonstrated that the empathy difference according to body language types was significant in the sad emotions group. In particular, empathy was higher in the movement type and the type combining movement and contextual information than in the static body language type and the contextual information type without movement.
First, these results confirm that empathy is reinforced by the movement of emojis and reinforced further when movement and contextual information are combined. This could be because the movement of emojis promotes emotion recognition (Pollick et al., 2001) as well as mirroring, which has a causal relationship with emotional expression processing (Niedenthal et al., 2001). Movement thus has a positive effect on empathizing with another person's emotions by simulating the expression of emotions through enhanced mirroring and emotion recognition, thereby generating emotional contagion. Furthermore, we infer that combining movement and contextual information in this empathy-generating process yields richer emotional knowledge and narratives and has a more positive effect on empathy by increasing emotion recognition.
Second, the fact that the empathy difference according to body language types of emojis was significant only in the sad emotions group suggests that the influence of emojis' movement and contextual information is greater for sad emotions than for joyful emotions. According to previous studies, the arousal of an emotion can be interpreted as its intensity (Lang et al., 1998). Thus, emotion recognition is easier for joyful emotions, which confer high emotional arousal, than for sad emotions, which have a low level of arousal (Alves et al., 2009; Garcia & Tully, 2020). In addition, for emotions with a pleasant valence, higher emotional arousal results in greater mirroring (Fujimura et al., 2010; Greenwald et al., 1989; Witvliet & Vrana, 1995). Based on these reports, joyful emotions, with their high emotional activation and pleasant valence (Russell, 1980; Wallbott, 1998), already elicit a high level of emotion recognition and mirroring. Therefore, the presence of movement and contextual information in emojis may contribute relatively little to the empathy induced by joyful emotions. However, since sad emotions, with their low emotional activation and unpleasant valence (Russell, 1980; Wallbott, 1998), have the opposite effect, the presence of movement and contextual information affects mirroring and emotion recognition for sad emotions, making these elements key in generating differences in empathy. That is, movement can be considered to reinforce mirroring and facilitate emotion recognition. In addition, according to a previous study in adults, when contextual information is added, the increase in the rate of emotion recognition is higher for sad emotions (82%-96%) than for happy emotions (94%-99%; Theurel et al., 2016).
This could be because adding contextual information to sad emotions, for which emotion recognition is not easy due to a low level of arousal, clarifies the emotional signal (Aviezer et al., 2008); this could also explain the positive impact on the rate of emotion recognition (Theurel et al., 2016). Based on these previous studies, we can conclude that the presence of movement and contextual information in emojis can compensate for the lowered mirroring and emotion recognition of sad emotions caused by their unpleasant valence and low level of arousal. For this reason, we judge that the empathy difference according to body language types (the presence of movement and contextual information) was significant in the sad emotions group but not in the joyful emotions group.
However, in this study, the difference in empathy according to the degree of bodily expression (upper body, whole body) was not significant in either the joyful or sad emotions group. This suggests that the upper body of an emoji has the same effect as the whole body, rather than that the degree of bodily expression does not affect empathy. In general, whole-body emojis could generate a higher level of empathy than upper-body emojis because they have a larger cumulative emotional effect in terms of mirroring or emotion recognition (Flack et al., 1999); indeed, in this study, empathy showed a higher mean value for the whole body than for the upper body in the sad emotions group. However, previous studies have reported that emotion recognition is possible from the movement information in upper-body gestures alone (Glowinski et al., 2011) and that upper-body movement can be sufficient for emotion recognition in natural scenarios (Volkova et al., 2014). These findings show that the upper body alone can suffice for emotion recognition when movement and contextual information are combined as expression elements of an emoji. For this reason, we judge that the difference in empathy between the upper body and the whole body was not statistically significant in this study. Moreover, there was no interaction effect between body language types and degree of bodily expression in either emotion group. This suggests that even if combining movement and contextual information increases complexity in cognitive processing, the empathy induced by an emoji is not so constrained by cognitive availability that it would be higher for the upper body than for the whole body.
Finally, the results of this study are summarized as follows. First, this study confirmed that emotion type and the presence of movement and contextual information are nonverbal expression elements of emojis that affect the level of induced empathy. These elements therefore need to be considered when designing and applying emojis to increase empathy. Second, our results demonstrate that empathy is strengthened by the movement type (Type C) and the type combining movement and contextual information (Type D) in the sad emotions group. These emoji types (Types C and D) are expected to work more effectively when applied to emotion types with low mirroring and emotion recognition rates, such as sad emotions. However, there were no significant differences in empathy according to the degree of bodily expression (upper body and whole body). Given that emojis are sent and received in real time, this finding indicates that the most effective emoji types for inducing empathy are the upper-body movement type and the upper-body type combining movement and contextual information.

Implications and Limitations
In this study, by analyzing differences in empathy according to the nonverbal expression factors of emojis, we identified the factors causing these differences and the emoji types effective for inducing empathy. The present results are expected to serve as a practical guide for emoji design and relationship marketing. We identified the specific elements that affect the empathy induced by emojis, as well as the emoji types that are most effective for inducing empathy. In addition, the results confirmed that the difference in empathy according to the degree of bodily expression was not significant (the upper body of an emoji can replace the whole body), which could help designers save time and effort when designing emojis.
However, in this study, there was no significant difference in empathy between the movement type and the combined movement and contextual information type, so a sender faces the problem of having to choose between the two. If economy is the main consideration, the movement type can be selected; if a stronger effect is desired, the combined type, which recorded the highest mean value, can be selected. However, since the current results were obtained without distinguishing recipient groups, a more effective type may exist depending on the recipient group. That is, depending on recipient characteristics (e.g., mirroring and emotion recognition ability), there may be groups for whom movement alone sufficiently induces empathy and groups for whom the combined movement and contextual information type is more effective. A follow-up study addressing this question is therefore needed, as applying emojis suited to the characteristics of the recipient could yield greater communication and marketing effects.
Furthermore, in the descriptive statistics of this study, empathy tended to decrease for the contextual information type without movement, except for the upper body in the joyful emotions group. This raises the possibility that the empathy generated through emojis may not be a process of mentalization or high-level simulation, in which contextual information plays a positive role as an additional source of knowledge and information regardless of the presence of movement. That is, it may be a low-level process (low-level simulation based on mirroring; O'Neil, 2013), to the extent that it is negatively affected by contextual information without movement, although not to the extent that an interaction effect between body language types and degree of bodily expression emerges. However, since this was not confirmed in the present study, we propose a follow-up study to identify the cognitive processing of emojis and the characteristics of the empathy-generating process based on the mechanism for generating empathy via emojis.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.