Student Engagement With Teacher Feedback in Pronunciation Training Supported by a Mobile Multimedia Application

This study focused on students' experience with a mobile multimedia application in a pronunciation course. The app was used for after-class pronunciation practice, through which the instructor delivered multimodal feedback to students. Student engagement with teacher feedback in the app was investigated from multiple data sources. Data collected in the form of the instructor's responses and analytics gathered through the mobile system revealed that students demonstrated a high level of engagement in the app behaviorally, cognitively, and emotionally. Specifically, the instructor observed more submissions from and interactions with students compared with previous classes. Records of student-teacher interactions suggested that students reacted positively and actively to multimodal feedback from the teacher and expressed willingness to use the app for future learning. Questionnaire results further confirmed that a large number of students perceived the app to be useful for practicing pronunciation tasks, as it made it easy for them to interact with the teacher, receive course materials, and submit recordings. Interviews with selected students revealed more details about their experiences and views. Some students reported accessing their peers' readings and the teacher's feedback on them as a way of informing their own learning, which was made possible by the app. This study offers implications for enriching pronunciation instruction so as to maximize student engagement, which ultimately contributes to positive learning experiences and gains.


Introduction
The equivalent of CALL in pronunciation training is CAPT (Computer-Assisted Pronunciation Teaching), often discussed as an alternative to classroom instruction in research syntheses (Thomson & Derwing, 2014). In Thomson and Derwing's (2014) review, CAPT accounted for nearly a third of all studies. However, similar to classroom-based studies, previous CAPT research has focused on investigating its efficacy on certain aspects of students' pronunciation development, and mixed results were produced largely due to variations in research design. Few studies have looked into teachers' and/or students' experiences in the teaching/learning process. Furthermore, technology has primarily been used to provide "models" from native speakers, so it functions mainly as a source of teaching/learning materials. The utility of technology in facilitating teacher-student interaction remains under-explored.
In the context of foreign language education in China, pursuing intelligibility as a communication-oriented learning goal (Levis, 2005; Thomson & Derwing, 2014) is not enough for English majors. Rather, when being native-like is the yardstick, new English majors' pronunciation is judged to be ridden with deficiencies. Given the difficulty of progressing toward nativeness (Levis, 2005), feedback from the teacher is indispensable in pronunciation training. However, pronunciation is often sidelined by students. Furthermore, pronunciation instruction must accommodate possibly large variation in individual learning needs (Munro, 2016). More importantly, the first-year university experience figures prominently in assimilating students into an unfamiliar academic and social culture and in fostering learner regulation over their learning. In this period, when students often face multiple challenges, formative assessment and feedback practices, especially those supported by information and communication technologies, may be used to enhance student experience and develop their capacity for self-regulated learning (Nicol, 2009).
In fulfilling these instructional purposes, technological advances are likely to be especially valuable in identifying specific learners' difficulties and addressing them accordingly (Munro, 2016). Mobile-assisted language learning (MALL) is defined by Kukulska-Hulme and Shield (2008, p. 273) as formal or informal learning mediated via handheld devices, which is potentially available anytime, anywhere. Chinese students now increasingly rely on smartphones to conduct a major part of their learning. This shift is prompting teachers to make the transition to MALL, as students are better (or solely) equipped for mobile learning. However, it is rather disappointing that few pedagogical attempts have been made to adopt mobile technology in pronunciation instruction, especially regarding teacher feedback.
This research was fundamentally classroom-based, germinating from a pronunciation course for first-year English majors in a key Chinese university. The study aimed to examine student engagement with teacher feedback in a mobile multimedia system. Data sources included the instructor's experience and evaluation of the usefulness of the system, students' responses to a questionnaire and interviews with selected students, as well as learning records retrieved from the system. This study has pedagogical implications, as its findings may encourage more language instructors to teach pronunciation in systematic and novel ways facilitated by technology.

Technology Use in Pronunciation Instruction
There have been considerable efforts to apply technology in pronunciation instruction. Of the 75 studies surveyed by Thomson and Derwing (2014), over two-thirds were classroom-based, while the rest were computer-assisted. While CAPT can encompass any form of computer-mediated pronunciation teaching/practice in a broad sense, Agarwal and Chakraborty (2019) review CAPT systems from a narrower perspective, defining them as tools that can record, detect, and diagnose mispronunciations, as well as generate suggestions targeting the errors.
CAPT has been employed variably in designing training methods. The most basic use of technology in pronunciation training is recording speech imitation, where student voices are recorded while following native-speaker input as a model, to facilitate comparisons and later feedback and evaluation by the teacher. Self-paced recording allows learners more control over their speech production. CAPT has often been used to enhance learners' perceptual ability and sensitivity to segmental features. For instance, it enables speech recordings to be manipulated, such as altering vowel duration, to emphasize certain segmental cues. It also facilitates High Variability Phonetic Training (HVPT), in which learners enhance their perceptual skills by listening to L2 segmental contrasts in multiple phonetic contexts (Levis, 2016; Thomson, 2012). Baran-Łucarz and Cardoso (2015) investigated the effectiveness and learner perceptions of English phonetics instruction with a learner response system. Learners perceived the technology as providing an anxiety-free, interesting, and exciting learning experience. In particular, active participation and involvement were identified, which led to better retention of learning material.
Technology enables the delivery of multimodal inputs (Martinsen et al., 2017; Wisniewska & Mora, 2020). The emergence and popularity of podcasts offer an abundance of culturally authentic language input for pedagogical use. As a step further, podcasting has been employed in pronunciation training as a way to encourage students to share their output and collaborate on podcast-creating tasks (Lord, 2008).
Driven by the belief that pronunciation instruction should go beyond controlled contexts, and with technology playing a role, Martinsen et al. (2017) investigated the effects of video-based shadowing and tracking exercises, in both in-class and after-class training, on L2 learners' controlled and spontaneous speech production. Learners' perceptions about the use of technology during the learning process were also examined. Such multimodal exercises were found useful in improving pronunciation performance, particularly when performance was measured in a controlled context, that is, through read-aloud tasks. Leis et al. (2015) introduced three learning activities delivered to students through mobile technology: videoing, practicing pronunciation, and testing. In pronunciation practice, students' reading of texts was audio-recorded, and a speech-to-text application generated transcripts to show which parts were pronounced inaccurately. Results showed that students were motivated to learn with mobile technology and tended to be more autonomous in their learning.
It can be seen that technology has facilitated access to authentic and multimodal inputs, enabled diversified forms of exercise such as shadowing and tracking, and promoted collaborative learning tasks. Technology-assisted pronunciation training benefits learners in perception and speaking, and contributes to fostering autonomy and motivation.

Student Engagement With Teacher Feedback in Pronunciation Instruction
Teacher feedback addresses the gap in students' knowledge and ability and can potentially motivate students to keep working on their weak points. Feedback effectiveness has been well documented in experimental research (Kluger & DeNisi, 1996). Generally positive effects of feedback have been identified in pronunciation instruction (Lee et al., 2015; Saito, 2013; Saito & Lyster, 2012). Feedback, as a form of social support, is useful for clarifying unclear messages, reinforcing good practice, and highlighting weaknesses and persistent problems.
The utility of teacher feedback on students' assignments ultimately depends on how students engage with it. In Martinsen et al.'s (2017) summary of three critical components of pronunciation instruction, the third is "engaging learners in a series of controlled, guided, and communicative practice activities with feedback." "Mindful" engagement, characterized by reflection, interpretation, deepening understanding, and changes in later behavior (Salomon & Globerson, 1987), should be promoted, rather than superficial engagement in which students merely collect feedback without careful deliberation and further action on it. Enhancing student engagement with feedback may foster a greater sense of responsibility and ownership for learning among students.
Though feedback has been widely practiced both in and out of the classroom, the process of student engagement with feedback has received little direct attention in the pedagogic literature. One recent study that explicitly employs such a perspective is Zheng et al. (2020), who investigated students' responses to teacher feedback on their translation assignments from the perspective of engagement. As teacher feedback plays a critical role in helping students identify the gap and guiding them toward good practices, it naturally deserves more attention from pronunciation researchers and instructors. However, the discussion around teacher feedback is very limited in pronunciation instruction studies, and the engagement perspective is largely missing.

Technology-Assisted Feedback
Computer-mediated communication generates opportunities for learners to practice speaking with native speakers and other L2 learners (Alastuey, 2010). In oral and written language learning tasks, teachers provide feedback orally or in written form. Technology can facilitate the generation of written feedback, even as an alternative to human judgment. For example, Automatic Speech Recognition (ASR) was employed in Hincks (2003) and Neri et al. (2008) to provide feedback to learners on the intelligibility of their productions, but the discrepancy between ASR judgment and human judgment remains a topic of debate. Technology also enables access to multimodal feedback. Hardison (2005) made an early attempt to explore multimodal input and feedback by employing computer-based tools and speech videos in L2 prosody training. Apart from native speakers of English providing global prosody ratings, visual displays of the pitch contour also functioned as feedback.
It can be seen that technology has mainly been used to provide and manipulate input for pronunciation instruction. One major limitation of CAPT is that it is conceived primarily as a channel for delivering training input, whereas the feeding back of formative assessment information is also important in the whole teaching/learning process. As Hattie and Timperley (2007) point out, reducing the performance gap involves both cognitive and affective processes, which concern effort, motivation, or engagement. It follows that there is a need to understand student engagement with technology-assisted feedback. Moreover, similar to the distinction between CALL and MALL, mobile technology should also be included in the discussion of CAPT.
Student engagement with teacher feedback may vary and many times may be rather negative. Well-crafted, prompt feedback may not be appreciated by students (Gibbs & Simpson, 2005), which seriously compromises its effectiveness. Technology use is intended to encourage student engagement with feedback (Hepplestone et al., 2011), yet opinions among both educators and students are polarized: some have expressed strong resistance, while others see its benefits for learners. Despite some research attempts to uncover feedback givers' and receivers' perceptions, little is known about students' actual experiences of external feedback given by teachers through a multimedia system. In pronunciation instruction, such research barely exists.
The current study is an effort in that direction. This classroom-based research, by applying to pronunciation instruction a mobile application that enables multimodal feedback on students' pronunciation recordings, aims to investigate students' experience with mobile-assisted teacher feedback in a naturalistic setting. The research question is: Does the mobile application contribute to student engagement with teacher feedback? To answer this question, multiple data sources are drawn on, including responses from the instructor, records obtained from the app, a questionnaire administered to the students, and interviews conducted with selected students. Fredricks et al.'s (2004) conceptualization of engagement as a multi-dimensional concept encompassing behavioral, emotional, and cognitive components, though proposed from a broader educational perspective, has been adopted widely in analyzing engagement with specific tasks. Behavioral engagement refers to students' participation in the learning activity, which can be observed and tallied (e.g., asking questions); emotional engagement relates to affective responses to tasks, teachers, and peers; and cognitive engagement encompasses reflections and willingness to invest (mental) effort to grasp complex ideas and master difficult skills (Fredricks et al., 2004; Handley et al., 2011). In the current study, student engagement with teacher feedback is defined as students' emotional response to feedback provided by the teacher and the effort made, both behaviorally and cognitively, to make sense of it and to develop strategies for future learning. Feedback here is not limited to that directed at a specific learner but includes feedback given to other learners, which is accessible to all students.

Defining Engagement With Feedback
As to the depth, or more specifically, the state or activity of engaging actively with feedback, "mindful" (Salomon & Globerson, 1987), "active," "agentic" (Reeve & Tseng, 2011), and "proactive" (Winstone et al., 2017) have been proposed as modifiers. Researchers have noted that when students play active roles in the feedback process, rather than acting simply as passive receivers, their self-assessment and self-regulation skills develop, making them more independent learners (e.g., Butler & Winne, 1995). It is widely acknowledged that engagement is not static; rather, it is subject to change as a result of students' experiences and (dis)satisfaction with previous feedback encounters.

Context and Participants of the Study
This study took place in a pronunciation course offered to first-year English majors in a key university in China. Two classes, 43 students (8 male and 35 female, aged 18-19) in total, participated in this study; they met once a week for 90 minutes, divided into two sessions with a 10-minute break. At the start of the course, it was established that they had no previous experience of learning pronunciation through technology. The pronunciation instructor of the two classes had over 16 years of experience in teaching oral and listening courses. She had never employed any app or software in teaching pronunciation before, and she followed the research design in conducting her classes. The course lasted 12 weeks. Students used a WeChat-based app named "Xiaodaka" for in-class and after-class pronunciation practice.
Xiaodaka is a social networking mini-app, or applet, that allows users to find communities of interest and interact with fellow users who share the same interest. By regularly posting activity records in the form of text, audio, video, or pictures, users can share their experience and attract attention from other users. Xiaodaka is not an independent app but a mini-app that runs inside WeChat, the most popular social networking app in China. For convenience, this mini-app will be referred to as an app for the remainder of this article. In the fourth week, a dedicated pronunciation group was created in Xiaodaka, and all students joined the group upon invitation by the instructor (see Figure 1).

Learning Activity Design
In terms of focus of instruction or scope of training (Thomson & Derwing, 2014), the course syllabus covers both segmental and suprasegmental aspects of pronunciation, including individual sounds, linking, word stress, sentence stress, intonation, rhythm, voice projection, speaking rate, etc. The app was introduced to the students in the fourth week and was used for 7 weeks. Each week, the teacher would upload course materials in the app in the form of PDFs, audio, video, and pictures. Each week, the teacher would also create several reading tasks, from which students could choose tasks to their liking and submit their own reading. The teacher would then select some work from the submission pool to give feedback on. The feedback usually contained a screenshot of the script with annotations that highlighted the pronunciation problems, as well as one or several audio recordings of the teacher's comments to elaborate on the annotations. Feedback was mainly in the form of explicit phonetic information (Saito, 2013), with a formative orientation (Black & Wiliam, 2009), and elaborated in nature (Shute, 2008). While focusing on students' problems, the teacher also gave positive comments on what the student had done well or acknowledged their progress in comparison with their prior performance. Submission of reading was not compulsory, and multiple submissions of the same task were welcome. In the app, there were four ways for a participant to interact with a speaker: they could "like" the work, "comment" on the work, "share" the work with WeChat friends or groups, or "evaluate" the work. "Comment" allows text input, pictures, audio, and video, while "evaluate" has two more features than "comment": one is rating the work by giving stars (maximally five stars, equivalent to 1-10 points in total), and the other is selecting from a pool of ready-made brief remarks or creating a new remark that can be added to the pool.
A speaker could also respond to feedback from others by further interacting with the feedback-giver. As the rating function in "evaluate" was not very fine-grained, and the teacher considered it more appropriate to downplay the importance of pursuing performance goals (Meece et al., 1988) in after-class activities, teacher feedback focused on giving comments, and star rating was rarely used. When stars were indeed granted, the teacher mainly gave five stars to students who had done almost perfectly on a reading assignment.

Data Collection
Analytics on students' activity in the app were collected as an objective data source for analyzing their engagement in the system. As no control group was employed in the current study, the instructor was interviewed in an open-ended manner at the end of the experiment to obtain her evaluation of student engagement in comparison with her previous experiences of pronunciation instruction. Students' perceptions of the technology and engagement with teacher feedback were further examined through a questionnaire and interviews.
Learning analytics. The app provided analytics related to all participants, including students' logging data, submissions, and comments/evaluations records related to an assignment.

Questionnaire.
A questionnaire was designed to obtain information on four major aspects: (1) students' behavioral engagement in learning (7 items), (2) students' perceptions about the usefulness of the app in general (4 items), (3) students' intention to use the app in future learning (1 item), and (4) students' preferences for teacher feedback (3 items). Responses were mostly rated on a 5-point Likert scale, the end-points being "strongly disagree" (1 point) and "strongly agree" (5 points). The questionnaire also included two multiple-choice questions on estimated time spent on pronunciation practice and on using the app. The total number of questions therefore amounted to 17.
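For illustration, scoring such 5-point Likert responses and computing per-item means can be sketched in a few lines of code; the item labels and responses below are invented for demonstration and are not the study's actual data:

```python
# Sketch: convert 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) to numeric scores and compute per-item means.
# Item names and responses are hypothetical, for illustration only.
from statistics import mean

SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def item_means(responses):
    """responses: list of dicts mapping item id -> verbal rating."""
    scores = {}
    for answer in responses:
        for item, rating in answer.items():
            scores.setdefault(item, []).append(SCALE[rating])
    return {item: round(mean(vals), 2) for item, vals in scores.items()}

sample = [
    {"app_useful": "agree", "will_use_again": "strongly agree"},
    {"app_useful": "strongly agree", "will_use_again": "agree"},
    {"app_useful": "agree", "will_use_again": "neutral"},
]
print(item_means(sample))  # {'app_useful': 4.33, 'will_use_again': 4.0}
```

Per-item means of this kind are what a summary table of Likert responses typically reports.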
When designing the questionnaire, the author consulted two colleagues who each have over 15 years of experience in teaching oral English courses. It was agreed that the items would adequately solicit the most relevant information regarding students' experience and views. Three students from the study were then asked to complete the questionnaire to ensure that the language was clear and unambiguous, and some items were reworded for clarity. These students' responses were included in the analysis, as clarification was provided immediately while they were answering the questionnaire.
Interview of selected students. Outward behavior offers direct evidence of engagement, but there can also be engagement that is invisible to the teacher. Furthermore, unlike surface-level or superficial engagement, which is usually observable, deeper-level engagement, or mindful reflection upon feedback, is more elusive, as it is often a solitary process happening internally within the learner's thinking. Therefore, interviews may be a good way to elicit such information in an indirect manner. Eight students participated in a semi-structured, one-on-one interview with the researcher's teaching assistant, a post-graduate student, offering more detailed feedback on their experiences and perceptions of the technology-assisted course activity. These interviewees were selected based on their participation in reading activities in the app (see Table 1). Four were considered non-active students, each submitting 6 to 9 recordings in total, and the others were active participants, each submitting 18 to 39 recordings in total. The interviews were conducted within 2 weeks after the course was over, and were recorded and transcribed for analysis. Each interview took approximately 30 minutes. Oral consent was obtained from all interviewees before the interview started. Students' responses were used to supplement the quantitative results.

Student Engagement as Reflected in Learning Analytics Collected by the App
Reading tasks and submissions. Altogether the teacher created 32 reading tasks. Figure 2 displays the tasks given out over the 7 weeks, together with the number of submissions for each task. On a weekly basis, the first week after the app was introduced saw 57 submissions, indicating that some students did the same task repeatedly or did more than one task. Week 2 submissions continued to increase, totaling 75. The largest number of submissions came from tasks created in week 3, with 104 recordings in total, meaning each student submitted about 2.4 recordings on average. Week 4 tasks elicited 94 recordings, ranking second. The task for week 5 did not receive any response, and submissions for weeks 6 and 7 declined dramatically compared with the first 4 weeks.
The most popular task, which generated 40 submissions, was created in week 2. Seven tasks received between 23 and 27 submissions, and five tasks received between 10 and 19. Fifteen tasks were done fewer than 10 times, and no one submitted recordings for the remaining 4 tasks.
Submissions by student. In total students submitted 521 recordings, roughly 12 submissions per person on average. The most active student had 39 submissions and the least active submitted three recordings. Detailed statistics are illustrated in Figure 3.
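Per-task and per-student tallies like those reported here and in Figures 2 and 3 can be derived from the app's raw submission records. A minimal sketch, in which the record fields and values are assumptions for illustration rather than the app's actual export format:

```python
# Sketch: aggregate a hypothetical submission log into per-student
# and per-task counts (field names are assumptions, not the app's API).
from collections import Counter

def tally(submissions):
    by_student = Counter(rec["student"] for rec in submissions)
    by_task = Counter(rec["task"] for rec in submissions)
    return by_student, by_task

log = [
    {"student": "S01", "task": "week2_poem"},
    {"student": "S01", "task": "week2_poem"},   # repeated submission
    {"student": "S02", "task": "week3_dialogue"},
    {"student": "S01", "task": "week3_dialogue"},
]
by_student, by_task = tally(log)
print(by_student["S01"], by_task["week2_poem"])  # 3 2
```

Counting repeated submissions separately, as here, matches how the study treats multiple submissions of the same task as distinct recordings.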
One low-frequency user explained why she did not log into the system very often:

The tasks in the system were not compulsory. I have a habit of doing morning reading. So I usually get up very early to read materials that are required to be memorized in my intensive English course. I will pay attention to pronunciation when I read and memorize these materials. . . . The pronunciation materials are good, for appreciation and enjoyment. But I have exams for Intensive English class. So I would spend time memorizing those. . . . I don't have a lot of interest in English in general, though it is my major. I would say I'm utilitarian. (Student #30)
From this student's answer, it is clear that some students would not prioritize course tasks that are not related to grades, particularly when courses compete for students' time and attention. For students with a utilitarian orientation, making tasks compulsory may result in more "time on task," a superficial criterion of behavioral engagement, but that does not guarantee mindful engagement. Furthermore, when students' pronunciation is generally good in the first place, or they perceive it to be so, their level of readiness to engage may be even lower. This same student, who had been educated in a foreign language school before coming to college, did not consider pronunciation to be her priority.

I don't think pronunciation is a big problem for me. I was once recruited by the radio station, you know, but I quit. I know sometimes I made some mistakes in reading, but it's just because I didn't pay attention. I know how to do it right, with or without feedback. I will spend more time on other courses to get high grades. (Student #30)
It is clearly difficult to motivate students when they perceive the pronunciation course to be easier or less important than other courses. Student #30 was satisfied with her ability in pronunciation and decided to shift more attention and energy to the more demanding courses. In the final exam, this student was ranked in the 25th to 50th percentile. In comparison, Student #42 achieved a higher grade, ranking in the top 10th percentile, but she was still very active in practicing the tasks.
Another non-active student, who ranked among the 25th to 50th percentile in the final exam, gave different reasons for her lack of participation:

I consider myself a low-frequency user-one or two recordings per week. I normally read at the weekend, because the dormitory is quite noisy and I am quite busy every day with all the work. I need to find a quiet place. Another reason is that I tend to use my phone to record my reading. Then I would choose among several recordings and upload the best one. But I can't upload an external recording directly from my phone, so I need to transfer the recording from the phone to my computer and then upload. It's too much trouble. (Student #16)
Student #16 pointed out two important reasons that seem to have prevented some students from using the system more actively. One is heavy workload, which forces students to prioritize mandatory tasks from other courses; the other is the complexity of technological operation. Though the system was designed to facilitate reading and recording anytime and anywhere, many students preferred to use the voice recorders on their phones, which made it possible for them to save each version of their reading. They did not like the idea of pressing a button, speaking, and uploading, nor were they sure whether they could produce better recordings with more practice.
Students' responses to teacher feedback in the "comments" section. Analytics generated by the app showed that the teacher gave feedback on 135 recordings, and 58 of them, or 43%, received explicit responses from students. The comments students posted following teacher feedback could be categorized into several types; a breakdown of responses is illustrated in Figure 4. Feedback is meant to be a two-way process, a dialogue (Boud & Molloy, 2013) that involves exchange, interpretation, and transformation of information. In the multimedia feedback environment, students interacted with the teacher in the comments section to exchange information and clarify understanding. As Figure 4 shows, explicit affect is expressed in all responses except two. Over a third of responses were purely affective in nature, acknowledging the reception of the teacher's comments by saying "Got it!", "Thank you!", or "Thank you for your hard work!", often with emojis attached to indicate joyfulness (e.g., smiley face) or appreciation (e.g., rose). About 28% of responses were a combination of affect and behavioral commitment; that is, students expressed gratefulness for teacher feedback and an intention to take further action or make a further effort. Examples include: "I see. I will make another recording." "Thank you! I will pay attention next time!" "I think so, too! I will practice more!" These responses were not considered to involve a cognitive element because they did not refer to any specific understanding or action. Those that do involve a cognitive aspect accounted for 35% of all responses. Examples of a cognitive response are: "Thank you, (teacher name)! I will pay more attention to my stress! (happy face)" "Oh Oh Oh I am so happy! I was lazy not pronouncing /m/ in 'circumstances' properly, the same with 'sometimes' when I speak too fast. (awkward face) Thank you!"
These comments indicated that students were utilizing their self-evaluative skills, integrating teacher insights into their understanding and consequently re-shaping their evaluation system.
The responses in general demonstrated active interaction between students and the teacher. By exchanging comments in the system, students were actually having a dialogue with the teacher, which can support cognitive engagement. To begin with, comments showed that students had no trouble understanding the feedback message. Secondly, joy upon receiving comments from the teacher was obvious, suggesting a high level of satisfaction and a positive experience. Thirdly, the fact that nearly two-thirds of responses expressed an explicit commitment to further action demonstrated an ability, a willingness, and even an eagerness to act on the feedback. Applying feedback insights to the next assignment can be seen as further/latent action, which necessarily entails a deep level of engagement. Such manifestations of "commitment to change" or "readiness to engage" (Winstone et al., 2017) serve as good predictors of actual engagement in later encounters. All in all, learning analytics and feedback records showed that teacher feedback had a very positive impact on students' emotions, cognition, and behavioral intentions. These results support Kluger and DeNisi's (1996) view that feedback induces strong affective reactions, changes the locus of attention, and promotes behavior regulation.
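The three-way coding of student responses (affective, behavioral, cognitive) could in principle be approximated with a simple keyword-based first pass before manual verification; the cue words below are illustrative assumptions, not the study's actual coding scheme, which the article implies was done by hand:

```python
# Sketch: tag student responses as affective, behavioral, and/or
# cognitive, then compute the category tallies.
# Cue-word lists are hypothetical illustrations of the coding scheme.
AFFECTIVE = ("thank", "happy", "got it")
BEHAVIORAL = ("i will", "next time", "practice more")
COGNITIVE = ("stress", "pronounc", "vowel", "/")

def tag(response):
    text = response.lower()
    labels = set()
    if any(cue in text for cue in AFFECTIVE):
        labels.add("affective")
    if any(cue in text for cue in BEHAVIORAL):
        labels.add("behavioral")
    if any(cue in text for cue in COGNITIVE):
        labels.add("cognitive")
    return labels

responses = [
    "Thank you! I will pay attention next time!",   # affect + behavior
    "Got it!",                                      # purely affective
    "I will pay more attention to my stress!",      # behavior + cognition
]
counts = {}
for r in responses:
    for label in tag(r):
        counts[label] = counts.get(label, 0) + 1
print(sorted(counts.items()))
```

Because a response can carry more than one label, the tallies overlap, mirroring the combined categories (e.g., affect plus behavioral commitment) reported above.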
The following sections report interview and questionnaire results, representing the instructor's subjective assessment of student engagement, students' perceptions about using the app for learning pronunciation and their preferences for feedback.

Instructor's Evaluation of Student Engagement
The instructor was interviewed upon completion of the entire intervention process. The open-ended interview lasted about 30 minutes, starting with the instructor sharing her views and experiences of using the system in her teaching, with a special focus on her evaluation of student engagement. The audio-recorded interview was transcribed and analyzed following Fredricks et al.'s (2004) conceptualization of engagement as comprising behavioral, cognitive, and emotional aspects. Overall, the instructor identified a relatively high level of engagement in comparison with her previous pronunciation instruction experiences. She commented: "Previously, I would give students reading tasks by sending them a bunch of files in a chat group. If I did not make it compulsory, very few of them would complete the tasks. Some didn't even bother to open the files. This time, I got more submissions than expected and the interaction with students and among students was very frequent." Behaviorally speaking, the instructor noticed that the frequency of participation in the reading tasks was higher than in previous classes, as suggested by the number of submissions in total and by each student. Repeated submissions showed that some students valued teacher feedback to a great extent and were conscientious in making improvements based on it.
Cognitively speaking, the instructor believed that feedback in the form of snapshots of marked reading scripts along with explanatory audio served to draw students' attention to their pronunciation problems and formed the basis for follow-up interaction. "As the feedback was not transient any more, students would review my written and oral comments carefully. Then they would 'come to me' and communicate their understanding with me. Sometimes, it was a negotiating process." "The fact that other students could see one's feedback enabled them to assess whether they had the same problem, and whether they could learn something from the feedback." Emotionally speaking, the instructor held the view that submitting their work in the system created an expectation for feedback and the teacher's response contributed to boosting students' motivation to participate more actively. She said the students were "joyous" in completing the tasks and they seemed to enjoy practicing pronunciation.
"Even as my feedback was generally focused on their deficiencies, they were not frustrated by my negative comments. My feedback actually prompted them to make another, better submission. . . . At the end of the course, they seemed satisfied with their experience throughout the weeks."

Learners' Self-Reported Behavior
The classic approach to operationalizing active engagement is to measure "time spent" (Kulhavy & Stock, 1989). Though considered a "crude measure" and therefore not fully adequate, time-on-task still provides a glimpse into students' level of engagement. Figure 5 shows students' self-reported time spent on pronunciation course-related work and on the app (n = 43). It can be seen that only four learners spent less than 30 minutes on pronunciation coursework, which was considered limited. About 18 students (42%) spent between 30 and 60 minutes on pronunciation practice, which was a fair amount of input. About 12 students (28%) spent more than an hour and 9 (21%) spent more than 90 minutes each week practicing pronunciation, suggesting a high level of engagement. On the face of it, 1 or 2 hours per week does not seem like a lot of input; however, these numbers should be put into the context of the overall workload for English majors at a key university in China. On the one hand, students' schedules are packed with different courses, leaving them limited time to finish assignments given out by course instructors. On the other, students generally do not consider pronunciation as important as other aspects of English skills, so they normally spend their time on more urgent assignments. In addition, the instructor confirmed that spending more than 60 minutes on pronunciation practice was indeed beyond her expectations based on her teaching experience. Table 2 displays the Likert-scale question items and students' responses (n = 43). The first item further asked whether students mainly used the app for pronunciation practice. Students' responses basically matched their responses to the previous questions on "time spent." The self-reported study time indicates that students spent a fair amount of time each week after class practicing pronunciation, which testified to their behavioral engagement.
Item 2 examined the frequency of logging into the app for practice. A total of 25 students accessed the app on a weekly basis, accounting for more than half of all students. Seven students admitted that they did not use the app on a regular basis.
Item 3 showed that nearly half of the learners were willing to access the app for practice even if there was no compulsory assignment. Nine students did not have a strong internal motivation; instead, they were driven by assignments. These numbers largely matched the responses to Item 2.
The mean for Item 4 was very high, at 4.28: nearly 80% (34) of students said they would download and save the course materials from the app. In the interview, some students acknowledged that the materials provided by the teacher were interesting and valuable for learning, so they had a strong wish to save them onto their own devices. Saving the files also made it possible for students to import them into other apps where they could listen to and imitate the speaker.
Item 5 concerns students' attention to feedback. The mean for this item is the highest among all items. All learners said that they would pay attention to whether the teacher gave them feedback on their submission, which suggested that teacher feedback was very likely to encourage students to use the app more.
Items 6 and 7 are related to peer interactive behavior. Results showed that around 60% of students were curious about their peers' performance and embraced the idea of social learning. This suggests that a large number of students were eager to obtain more information to inform their own learning. Interview results showed that students would primarily listen to their close friends' recordings and give them a "like" as a way of showing support. In the words of Student #42:

Students' Perceptions of the Usefulness of the App
Four items are related to student-perceived usefulness of the app. The means for these items were all above 4, indicating very strong positive perceptions of the app's usefulness. Students almost unanimously believed that the app increased their interaction with the instructor, facilitated receiving course materials from the teacher, and made submitting assignments easy.
Responses from the interviews revealed that students felt compelled by the user ranking to use the app more. In summary, the overwhelming majority of students perceived the app to be useful as it offered convenience in interacting with the instructor, facilitated the two-way exchange of materials, and pushed them to devote themselves more to the course.

Intention to Use the App
The next item asked about students' intention to use the app in future learning. The mean for this item was extremely high at 4.40. Forty-one of the 43 learners supported using the app for pronunciation practice, and 14 of them expressed a strong intention to use it. Therefore, it can be concluded that the app use experience was very satisfactory for the students and that they were willing to continue using the technology in later training.

Preferences for Feedback
Apart from the above general questions, the questionnaire contained seven items more specifically related to students' preferences for feedback. About 38 learners (88%) expressed a desire for a grade on their performance in addition to feedback comments. Some students mentioned in the interview that a grade would give them an idea of the extent to which they were fulfilling course requirements. It can be inferred that students had a performance goal (Meece et al., 1988) in mind. Another explanation is that students were curious about their progress and performance on the tasks and believed a grade would provide a numerical measure of their performance, and might serve as a reference for their final grade.
Is teacher feedback a source of motivation for the learners? Students' previous responses related to their behavior have attested to this relationship. Here, responses to two more items further supported the positive relationship. About 25 students did not think that they would use the app less if they did not receive teacher feedback. This suggested that these learners did not engage in practice activities merely to get teacher feedback, and that they did not consider using the app a compulsory act. Eight learners indeed admitted that they would be demotivated if the teacher reduced the amount of feedback. On the other hand, if the teacher provided more feedback or more detailed feedback, most students (39) said they would be more active users; no students disagreed with the statement. These results indicated that the majority of learners perceived feedback as helpful to their learning, welcomed more of such feedback, and would therefore be willing to use the app more. In questions related to perceived benefits, most students admitted that the app acted like an external force pushing them to engage in learning activities. When that result is interpreted together with students' perceptions of teacher feedback, it can be concluded that, on the one hand, students opted to use the app voluntarily, and on the other, it was the teacher's feedback behavior in the app that drew students to use it more, because feedback made app use more relevant to individual learners.

Conclusion
App-generated learning data, the instructor's responses, and questionnaire and interview results from students testified to a relatively high level of student engagement with app-mediated teacher feedback and satisfaction with personal progress and the learning experience. The app was considered easy to learn to use, and app-facilitated activities were engaging and conducive to student learning. Similar to what Higgins et al. (2002) found, students in the current study expressed joy over learning. Though app use was completely voluntary and the reading tasks were optional (not related to the course grade), students were strongly motivated to participate in the activities, and teacher feedback delivered in the app was an important source of motivation.
Teacher feedback plays an instrumental role in enhancing students' learning experience. Wang et al. (2019) found that detailed feedback provided in a computer-based environment enhanced learning motivation and promoted learners' feedback perception, that is, their response to the quality and use of feedback. In the current study, receiving teacher feedback indeed acted as an impetus for some students to submit more recordings and some would send private messages to the teacher to seek feedback.
Affective engagement may affect students' behavioral and cognitive engagement; therefore, teachers should pay attention to students' emotional responses, such as (dis)satisfaction and anxiety. To encourage behavioral engagement, especially follow-up actions taken on feedback, teachers can adopt reward mechanisms or use gamification, as suggested by some of the participants. To promote deeper-level engagement with feedback, or cognitive engagement, dialogue between students and the teacher is important. Therefore, communication channels should be created and utilized either in the multimodal system or as a standalone tool (like the WeChat app used in this study). More importantly, an amicable environment should be created to promote socializing.
This study is limited in that it did not examine the instructional effects of technology-supported feedback on learners' pronunciation performance, nor did it compare participants with a control group. Pre-post designs and controlled experiments with large sample sizes may be adopted in future studies. Feedback design may be refined to focus on specific features. A student-centered approach that adopts collaborative learning and facilitates flexible learning can be explored (Shadiev et al., 2017). In addition, although the course has ended, it would be interesting to conduct a follow-up study to see whether and how the students actually continue to use the app, which could offer more evidence of its usefulness.

Availability of Data and Materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.