This study examines how three factors affect students’ reactions to critical feedback on an assignment—amount of feedback (none vs. low amount vs. high amount), source of feedback (instructor-provided feedback vs. peer-provided feedback), and the situational context of the feedback (revision of paper is or is not possible). An incomplete 3 × 2 × 2 between-subjects experimental design was used to expose students enrolled in a basic marketing course to hypothetical feedback scenarios that varied the aforementioned factors. N-way analyses of variance and analyses of covariance revealed main and interaction effects. Students generally responded more negatively to higher versus lower amounts of critical feedback provided by the instructor. By contrast, when peers provided the feedback, students in most cases responded similarly to low and high levels of feedback, and they indicated that a high level of peer feedback was more helpful than a low level of peer feedback. Allowing students the opportunity to revise their work had two interesting effects. The revision opportunity made them feel more dissatisfied with their current grade, and it also made them more receptive to the critical feedback. The results suggest much promise for increased use of peer-provided feedback as well as judicious use of instructor-provided critical feedback.

Marketing educators and students alike regard instructor-provided feedback on student work to be an essential tool in the learning process. When instructors alert students to the strengths and weaknesses of their work, that feedback provides a means by which students may assess their own performance and make improvements in their future work. The act of providing feedback allows instructors to communicate overall quality standards and expectations, and to explain their grading rationale regarding the work of each student.

For the purposes of this article, we define critical feedback as constructive information students receive from instructors about their performance on assignments and exams. In marketing courses, feedback frequently is given in the form of qualitative written comments, suggestions, and corrections that point out weaknesses of work on an assignment and/or illustrate how the student’s work can be improved.

Most instructors view the provision of feedback to be part of the job of teaching and spend a good part of their time writing individualized and substantive feedback comments on student work. Such feedback explains or justifies the earned grade, and most students expect to receive feedback. In their classic article, Chickering and Gamson (1987) include prompt provision of feedback as one of seven key principles of good practice in undergraduate education. The authors emphasize that students need to receive suggestions for improvement and opportunities to improve. Several studies aiming to define and describe the characteristics and practices of excellent marketing professors found, among other attributes and behaviors, that they provide timely and constructive feedback on exams and assignments. These are studies about both faculty perceptions (e.g., Conant, Smart, & Kelley, 1988; Smart, Kelley, & Conant, 2003) and student perceptions (e.g., Faranda & Clarke, 2004; Gruber et al., 2012). Providing clarifying and helpful feedback is seen as a form of mentoring students (Peltier, Drago, & Schibrowsky, 2003; Peltier, Schibrowsky, & Drago, 2007), as supportive of student learning, and as contributing to instructor–student rapport (Granitz, Koernig, & Harich, 2009).

However, many marketing educators question the efficacy of their efforts to provide such feedback, complaining that students are really only interested in the final grade and pay little attention to the feedback as a tool for reflection and development. Although students express a desire for feedback (Fluckiger, Tixier y Vigil, Pasco, & Danielson, 2010), many students do not appear to use the feedback they receive. This becomes especially germane and frustrating when faculty members are subject to high student–faculty ratios, are responsible for large classes, or teach courses requiring a high level of written and oral presentation, as is typical of marketing curricula. The effort of providing substantive feedback that students may then simply ignore can be mentally, emotionally, and physically exhausting for the instructor.

Students sometimes report that one reason they do not attend to feedback is that their previous experiences with feedback have been confusing or otherwise unhelpful. Unfortunately, even very carefully crafted feedback may be ignored if a student has already received feedback that he or she experienced as being of little value. Students reported to Weaver (2006), for example, that they did not always understand the meaning of terminology used in instructor-provided feedback and that they were left unsure of what their instructors were advising them to do to improve. Students further stated that instructor-provided feedback is too often vague and nonspecific, lacking in guidance, too focused on the negative, and unrelated to assessment criteria. Such criticism would seem to place an even greater burden on instructors to provide more and higher quality feedback.

Despite the central and pervasive role of feedback in marketing education—and the fact that providing feedback is effortful and time consuming for instructors—feedback is not an area that has received much research attention in the marketing education literature. Given the beliefs that educators and students hold about the importance of feedback (Fluckiger et al., 2010; Higgins, Hartley, & Skelton, 2002; P. Hyland, 2000) and the amount of time and effort marketing instructors devote to providing feedback, we believe students’ response to feedback is worthy of further study.

Within the marketing education literature, Debuse and Lawley (2011) noted that providing timely, constructive, and personalized feedback to facilitate student learning is a core function of marketing education. Yet, the provision of such feedback has become increasingly unsustainable as class sizes grow, modes of instruction expand, and academic workloads increase (Gillespie, Walsh, Winefield, Dua, & Stough, 2001). The use of rubrics, especially when aided by technology, can help streamline the process. Debuse and Lawley (2011) offered a computer-based marking tool (called SuperMarkIt), Czaplewski (2009) offered a computer-assisted grading rubric, and McBane (1996) developed a system of barcodes for providing students with detailed feedback on written and oral assignments, all as possible solutions to the inherent time and physical demands of providing feedback. Such tools seek to allow instructors to offer high quantity, detailed, and specific feedback while making the effort less taxing for the instructor. However, others have noted that the use of standard rubrics sometimes is felt to be too constraining and impersonal (Czaplewski, 2009).

Ackerman and Gross (2010) took a different approach, questioning whether students actually value the quantity of feedback that many instructors are prone to supply. The researchers asked whether students perceive a high level of feedback as beneficial, or whether in fact there can be too much feedback. The authors found through an experiment involving a hypothetical graded assignment that students seemed to prefer to receive fewer rather than more feedback comments from an instructor. Furthermore, students responded no more positively to receiving many feedback comments than to receiving no feedback comments at all. Students in the authors’ high feedback condition felt the hypothetical instructor had a more negative impression of them as students, liked the instructor less, and perceived the feedback comments to be less fair. Students perceived low and high levels of feedback to be equally helpful. Finally, students in the high feedback condition indicated less satisfaction with their own performance.

The authors concluded that if an instructor is most concerned with being liked and with having students be receptive to feedback (because students believe the feedback is fair and/or because students feel that the instructor has a positive impression of them as students), then an instructor might want to provide only a modest amount of feedback. On the other hand, if the instructor wants to motivate students by making them feel less satisfied with their performance, the instructor may want to provide a greater amount of feedback.

One of the limitations of the Ackerman and Gross (2010) study was that the feedback was given largely as an explanation for a student’s final grade on an assignment. Such feedback may provide information to help a student to perform better on a future assignment, but the student has no opportunity to improve the performance or grade on the present assignment. As such, it follows that a student may feel less motivated to attend to the feedback. Even those students who do attend to the feedback may have difficulty translating the feedback into advice for future assignments.

Looking Beyond One-Time Instructor-Provided Feedback

In the present study we seek to extend the work of Ackerman and Gross (2010) by asking what would happen if high and low levels of feedback were offered under different sets of realistic circumstances. Would students have the same overall negative responses to high levels of feedback? Would they still prefer to receive fewer rather than more feedback comments?

First, we ask what would happen if an instructor allows students to consider the feedback as they continue working on or revising their work on an assignment. Many instructors use iterative writing assignments and other assignments where feedback is provided along the way toward completion. Would students value feedback more under these conditions? Would they view the feedback as more worthwhile because they could put it to immediate and practical use? Such questions are also relevant to workplace situations where ongoing feedback is given to employees with the goal of continuous improvement and in any context where individuals are pursuing goals (Ashford, Blatt, & VandeWalle, 2003; Finkelstein & Fishbach, 2012).

Second, we ask what would happen if the same feedback was provided by students’ peers rather than by instructors. Peer evaluation and peer grading are means that some instructors have used to alleviate part of their workload while still providing students with ample feedback. Furthermore, requiring peer feedback can serve the purpose of helping to validate the professor’s assessments and critiques on subjective work (Gopinath, 1999). Students may complain that an instructor’s feedback is overly critical, but when they encounter similar feedback from their peers they may come to understand that the instructor’s critique was in fact quite fair and not idiosyncratic.

These concerns suggest three broad research questions, which are presented and discussed below.

Research Question 1: How will students react to differing levels of critical feedback?

The current study seeks in part to replicate some of the findings of Ackerman and Gross (2010). The authors’ study found that students had a generally more positive response when provided with fewer feedback comments versus more feedback comments. One reason was the negativity effect. Even carefully worded feedback comments given for purposes of offering constructive suggestions for improvement are likely to be viewed by students as negative. Jones and Davis (1965) argued that negative cues are more salient than positive cues in impression formation because it is more socially normative to provide others with positive information. Thus, negative information such as critical feedback is perceived as inherently more informative and indicative of the provider’s true feelings. As a result, it is likely that the more critical feedback students receive, the more negatively they will react.

Critical feedback may also lead students to believe that their instructor has a negative impression of them. Based on the prior findings of Ackerman and Gross (2010), we expect that even when students receive the same final grade on a written assignment (e.g., a B−), the students will perceive their instructor to have a more negative impression of them as students when that grade is accompanied by a higher number of critical feedback comments than when the grade is accompanied by less critical feedback. Furthermore, we expect that when students receive a high level of critical feedback, this feedback may have other repercussions on the students, such as causing them to like the instructor less, to feel anger, and to experience less happiness. Higgins et al. (2002) and Carless (2006) found that critical instructor-provided feedback, although intended to be helpful, was frequently viewed by students as not helpful. What an instructor might provide as a suggestion for future improvement, a student might perceive to be merely a critique that serves to discourage. Such feedback contributes to perceptions of the instructor as authoritarian, overly judgmental, and detached. Similarly, Weaver (2006) discussed the potential of critical feedback to be misunderstood and to cause anger and hurt.

We are interested also in the attributions that students form as they are affected by the amount of feedback they receive. We know that attributions sometimes satisfy a self-serving motivation to protect positive beliefs about the self that may be inaccurate (Forsyth, 1980; Marsh, 1986). This self-serving bias further suggests that people attribute successes to internal causes and failures to external causes (Miller & Ross, 1975). Individuals make self-protective attributions under conditions of failure and self-enhancing attributions under conditions of success. For example, a student who earned a poor grade on an exam may attribute his or her grade to an external cause such as chance or luck. Conversely, a student who performed well might attribute his or her performance to internal causes such as his or her ability or amount of effort.

Combining ideas from the negativity effect and self-serving bias, we expect that a student should be more likely to avoid or discount an instructor’s critical feedback because that feedback will be perceived as reflecting the instructor’s true assessment of the student and this assessment may threaten the student’s positive self-perceptions (Ashford et al., 2003). According to the self-serving bias, we expect that when a student’s positive self-perceptions are threatened by receiving a high level of critical feedback, he or she will be more likely to attribute performance to external causes rather than to internal causes such as personal effort or ability. However, when a student receives a low level of critical feedback, he or she should be less likely to be threatened by this feedback and will, therefore, be more likely to attribute his or her performance to internal factors such as his or her own ability and effort.

Research Question 2: How will student reaction to critical feedback differ when students are provided an opportunity to revise their work versus when they are not provided an opportunity to revise their work?

Next we ask what would happen if, rather than providing feedback comments along with the final grade on an assignment, an instructor provided feedback as part of a process that allows students to consider the feedback as they continue working on or revising that assignment. Would the negative reactions to feedback accompanying a final grade on an assignment still occur when students are able to revise their work? Or would students have a greater appreciation for receiving feedback that can help them to improve their performance on the current assignment?

Instructors across disciplines argue for the provision of iterative or formative assessment. Such feedback can be independent of a final grade and is provided so that students can assess and enhance their own learning. Students can make direct use of this type of feedback within the class and on the assignment in question (Chickering & Gamson, 1987; Fluckiger et al., 2010; Goldstein, 2004). As contrasted with feedback given at the end of the academic term or accompanying a final grade on an assignment, iterative or formative feedback is given in time for students to revise and improve their work, and can be used by the instructor to formulate corrective instruction (Guskey, 2003). For example, the instructor can see the most common misunderstandings and prevalent deficiencies and address these during class time.

When students are allowed to revise their work based on instructor feedback, they can be more involved in their own learning and actively use the feedback they receive to change their learning tactics and achieve their goals (Popham, 2008; Schunk & Swartz, 1993; Stiggins, 2005). It has been argued that feedback given only at the end of a learning cycle (such as at the end of a term or when a grade is assigned) is not effective for furthering student learning (Bollag, 2006). However, feedback given prior to the final grading of an assignment can motivate and inform students seeking to improve. Ferris (1995), for example, found that ESL (English as second language) students indicated through a survey that they were more likely to pay attention to instructor feedback provided on preliminary drafts versus final products. The students reported greater appreciation for the feedback and they found it more useful for improving their writing.

Some universities require academic departments to offer courses that include a revise–review–resubmit component, most commonly in courses designated as writing intensive. It is believed that such activity is instructional and potentially leads to generalized, long-term improvement in writing. While results are not uniformly positive (F. Hyland, 2003; Stellmack, Keenan, Sandidge, Sippl, & Konheim-Kalkstein, 2012), most agree that the practice of instructors providing written comments and allowing students to revise based on those comments is helpful to students (Goldstein, 2004).

Based on the above, we expect that students will receive feedback more positively when they have the opportunity to use the feedback to revise and improve their work on the current assignment. Students should be more receptive because feedback conveys information that can be used for improvement. While seeing a large number of critical comments might still be met with some negativity, students may also recognize the utility in being provided with constructive feedback for revision. Thus, we believe students will react more positively to critical feedback when they have the opportunity to use the feedback to revise an assignment for a better grade than when they do not have that opportunity.

Research Question 3: How will student reaction to critical feedback differ when the feedback is provided by peers versus when the feedback is provided by the instructor?

Finally, this study asks what would happen if critical feedback was provided by students’ peers rather than by the instructor. One way in which some instructors hope to relieve some of their workload without diminishing the amount of feedback they provide to students is to incorporate peer feedback into their assignments.

Peer feedback has long been used effectively in a wide range of disciplines and for a variety of types of assessment (Gopinath, 1999; McGourty, Dominick, & Reilly, 1998). The results of some studies of peer feedback are quite encouraging. Peer feedback offers the possibility of relieving some of the instructor’s workload without sacrificing quality and perhaps offering a different perspective as well (Hughes, 1995). For example, Cho, Schunn, and Wilson (2006) found that when students were given instruction, provided with carefully constructed rubrics, and afforded clear incentives to take the task of grading seriously, peer evaluation and grading of written work was as reliable and valid as evaluation provided by instructors. Although preparing students in the way these authors describe would require considerable effort, the practice may hold much potential for relieving an instructor’s overall workload. Ryan, Marshall, Porter, and Jia (2007) found that peer ratings of class participation differed from instructor ratings but not enough to result in a change of grade, and the authors attributed much of the difference to the design of the evaluation process. Yangin Eksi (2012) found similar improvement in writing between students who received peer feedback and those who received instructor-provided feedback. The peer review process relieved a great deal of instructor workload and students reported a positive attitude toward the peer review process.

A positive aspect of peer feedback is that it will likely be perceived as less threatening than instructor feedback. Since peers usually have no power to assign grades, students who wish to improve their performance on an assignment can appreciate their peers’ feedback without fear of penalty. Students can receive constructive feedback and provide the same to classmates without immediate risk to their grade. Since peer feedback should have positive informational value, but should elicit none of the negative reaction of instructor feedback, we believe that higher levels of peer feedback will elicit a more positive reaction from students than will lower levels of peer feedback.

By contrast, instructor feedback contains an element of threat because it is from the instructor. The negativity effect and self-serving bias discussed for Research Question 1 are likely to have the greatest impact when instructors are dispensing feedback on student performance. An instructor’s feedback will threaten a student’s positive self-perceptions (Ashford et al., 2003) in a way that peer feedback does not because the student knows instructor feedback is an assessment of student performance. Thus, we believe that higher levels of instructor feedback will elicit a more negative reaction from students than will lower levels of instructor feedback.

The study used a 3 (feedback level: no feedback comments vs. 2 comments vs. 10 comments) × 2 (source of feedback: instructor vs. peer) × 2 (revision possibility: revision possible vs. revision not possible) between-subjects experimental design to measure how students react perceptually to feedback in a scenario utilizing a hypothetical assignment from a course students would take in a subsequent semester. This method has been used in research on biases and attributions regarding student academic performance (Marsh, 1986). Data were collected from a broad range of business students in five introductory marketing course sections at two large public universities in the southwestern United States. Students were informed that the study would help the instructor improve instructional feedback. Data were collected 3 to 4 weeks into the semester via an online survey (n = 266). Instructors offered students extra credit for participation.

Students were asked to think about an assignment in a course they would take during a subsequent semester (Marketing Strategy) and read the statement, “You have just received a grade of B−, 80 out of 100 points, on your paper for the Marketing Strategy course.” Students then read additional information depending on which treatment they received for feedback level, source of feedback, and whether the feedback was for a draft to be revised or for a final product. Since the source of feedback treatment (instructor vs. peer) could not be utilized when the “no feedback” condition was used, an incomplete 3 × 2 × 2 factorial design was employed in this study, resulting in ten treatments to which students were randomly assigned.
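The cell structure of the incomplete factorial design can be illustrated with a short sketch (hypothetical code, not part of the study's materials): fully crossing the three factors would yield 12 cells, but because the source of feedback is undefined when no feedback is given, the "no feedback" cells collapse across source, leaving the ten treatments described above.

```python
from itertools import product

# Hypothetical illustration of the incomplete 3 x 2 x 2 design.
# When the feedback level is "none", the source factor is undefined,
# so those cells collapse into one treatment per revision condition.
levels = ["none", "low", "high"]
sources = ["instructor", "peer"]
revisions = ["revise", "no_revise"]

treatments = []
for level, source, revision in product(levels, sources, revisions):
    if level == "none":
        cell = (level, None, revision)  # source undefined without feedback
    else:
        cell = (level, source, revision)
    if cell not in treatments:
        treatments.append(cell)

print(len(treatments))  # 10 treatment cells rather than the full 12
```

Random assignment of participants to these ten cells then proceeds as in any between-subjects design.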

The manipulations were done as follows. For the “feedback level” manipulation, students were shown no feedback comments (“no feedback” condition), 2 feedback comments (“low feedback” condition), or 10 feedback comments (“high feedback” condition). The number of feedback comments in the “low feedback” and “high feedback” conditions was derived from asking students what they thought would be a little feedback and a lot of feedback. The comments used were randomly drawn from a pool of 30 feedback comments of the type commonly used on marketing papers. These feedback comments were pretested with marketing students and were considered by students participating in the pretest to be equally helpful and equally fair. The comments used in the “low feedback” and “high feedback” conditions are displayed in Appendix A. Students in the “no feedback” condition simply saw no feedback comments.

It should be noted that all of the comments in Appendix A are of a critical or negative nature. In a prestudy, 83 students were presented with negative feedback comments that were preceded by mitigating positive feedback comments. This prestudy found that the addition of positive comments made no significant difference in student responses relative to the use of negative comments alone. Perhaps students discount or ignore the positive comments and focus on the negative comments from instructors as the only important information with which they need to concern themselves. Positive feedback is gratifying and important, but students learn that negative feedback is where an instructor provides information for improvement.

For the manipulation of the “source of feedback,” students were informed that the comments were either from the instructor in the course or from their classmates. For the manipulation of “revision possibility,” students were informed that the feedback was either for their final product (i.e., the final version of the assigned paper with its final grade) or for a draft version that could be revised by the student to possibly improve the initial grade.

Students then completed an online questionnaire. Students given the high feedback and low feedback conditions were asked an open-ended question to elicit three words that described the hypothetical feedback they received. Students in all conditions then completed closed-ended questions pertaining to the assignment and the instructor, their attributions, their perceptions of the feedback’s fairness and helpfulness, and their emotional feelings about the assignment. The measures collected in the survey are displayed in Appendix B. All items were measured on 7-point Likert-type scales anchored by strongly disagree (1) to strongly agree (7).

Assignment and instructor-related measures were from Ackerman and Gross (2010). These were three-item measures asking whether students felt the instructor would have a negative impression of them as students (M = 2.82, α = .97), about their satisfaction with the grade (M = 2.38, α = .81), about how much they would like the instructor (M = 4.71, α = .87), and about their perception of the assignment’s difficulty (M = 3.56, α = .76).
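For readers unfamiliar with the reliability coefficients (α) reported for these multi-item measures, Cronbach's alpha can be computed as follows. This is a minimal sketch using synthetic Likert-type responses, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic three-item measure: a shared latent trait plus item-level noise,
# mimicking the internally consistent scales reported above.
rng = np.random.default_rng(0)
latent = rng.normal(4, 1, size=(200, 1))
items = latent + rng.normal(0, 0.7, size=(200, 3))
print(round(cronbach_alpha(items), 2))
```

Values near 1 indicate that the items move together, which is why the .68–.97 alphas reported here support treating each set of items as a single measure.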

These questions were followed by two attribution measures, modified from Dixon, Spiro, and Jamil (2001), to measure students’ attributions about performance on the assignment. The survey included a three-item measure of attribution to personal/student effort (M = 3.19, α = .92) and a three-item measure of attribution to personal/student ability (M = 3.68, α = .68). Students in both the low and high feedback conditions were then asked to respond to two three-item measures concerning the feedback given on the hypothetical assignment. The first measure inquired about the fairness of the feedback (M = 4.15, α = .86) while the second measure asked about the helpfulness of the feedback (M = 4.20, α = .90).

The survey also contained measures related to emotions associated with thinking about the hypothetical assignment, modified items from the emotion inventories of Burke and Edell (1989) and Richins (1997). The specific emotions measured were anger, exhilaration, frustration, glad, regret, and resentment. Factor analysis was performed on these emotion measures, which reduced the six individual emotions to three dimensions: a three-item measure of anger (M = 3.28, α = .81), a two-item measure of happiness (M = 3.31, α = .80), and a one-item measure of regret (M = 3.29). The survey concluded with basic demographics, cumulative college GPA, and manipulation checks to ensure that students perceived correctly the level of feedback, source of feedback, and ability to revise associated with their treatment.

Preliminary analysis revealed that the following dependent variables were significantly correlated with the student’s cumulative GPA at or below the .05 alpha level: student satisfaction with grade (r = −.29), like instructor (r = −.14), perceived difficulty of assignment (r = −.20), attribution to personal/student effort (r = −.17), attribution to personal/student ability (r = −.23), and regret (r = −.19). When analyzing the main and interaction effects of these dependent variables, we used N-way analysis of covariance while using cumulative GPA as a covariate. This statistical procedure allowed us to remove the bias that is caused by the cumulative GPA variable (Field, 2013). When analyzing the main and interaction effects of dependent variables that were not significantly correlated with cumulative GPA (namely, perceived instructor impression, fairness of feedback, helpfulness of feedback, anger, and happiness), we used N-way analysis of variance. Although all of the measures in Appendix B were treated as dependent variables, our discussion will focus mainly on those dependent variables that revealed statistically significant results that occurred at or below the .05 alpha level.
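The analysis strategy described above can be sketched as follows. This is an illustrative example with synthetic data and simplified variable names (only two factors and one outcome are shown); it is not the study's data or full model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data: two experimental factors plus cumulative GPA.
rng = np.random.default_rng(1)
n = 240
df = pd.DataFrame({
    "level": rng.choice(["low", "high"], size=n),
    "source": rng.choice(["instructor", "peer"], size=n),
    "gpa": rng.normal(3.0, 0.4, size=n),
})
# Outcome correlated with GPA, as with satisfaction-with-grade above.
df["satisfaction"] = 5.0 - 0.5 * df["gpa"] + rng.normal(0, 1, size=n)

# N-way ANCOVA: factors entered as categorical effects (with their
# interaction), GPA entered as a continuous covariate to remove its bias.
model = smf.ols("satisfaction ~ C(level) * C(source) + gpa", data=df).fit()
table = anova_lm(model, typ=2)
print(table)
```

For dependent variables uncorrelated with GPA, dropping the `gpa` term from the formula yields the corresponding N-way ANOVA.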

Main effects were found for only two of our independent variables, namely, level of feedback (none vs. low vs. high) and revision possibility (revision possible vs. revision not possible). No main effects were found for the source of feedback variable (instructor vs. peer).

Two-way interactions were examined for three combinations of variables, namely, “level of feedback by source of feedback,” “level of feedback by revision possibility,” and “source of feedback by revision possibility.” Significant interaction effects were found only in the first combination, that is, “level of feedback by source of feedback.” No significant interaction effects were found for “level of feedback by revision possibility” or for “source of feedback by revision possibility.” No three-way interaction effects were found. Details of the significant effects are summarized below.

Research Question 1: How will students react to differing levels of critical feedback?

The main effects for “level of feedback,” including the “no feedback” condition, on the dependent variable measures discussed above are displayed in Table 1. For the most part, the trends exhibited by the data on each row of Table 1 reveal that as the level of feedback went from “none” to “low” to “high,” students’ perceptions of the feedback became more negative. That is, students perceived that the instructor’s impression of them was more negative (M = 2.81, 3.51, and 4.12 for no, low, and high feedback, respectively), students were less likely to like the instructor (M = 5.04, 4.76, and 4.39), students made less attribution to personal/student effort (M = 4.28, 3.66, and 3.46), students were more likely to feel anger (M = 2.97, 3.19, and 3.61), and students were less likely to feel happiness (M = 3.68, 3.53, and 3.01). Greater perceived helpfulness of the feedback (M = 4.09 for low vs. 4.73 for high feedback) and less regret (M = 3.67, 3.26, and 3.08) were the only positive responses students indicated as the level of feedback increased. When level of feedback was varied from “none” to “low” to “high,” no significant main effects were found for the following dependent variables: satisfaction with grade, perceived difficulty of the assignment, attribution to student ability, and fairness of feedback.

Table 1. Means of Main Effects for Level of Feedback (n = 266).


Research Question 2: How will student reaction to critical feedback differ when students are provided an opportunity to revise their work versus when they are not provided an opportunity to revise their work?

Significant main effects for the “revision possibility” variable on the dependent variables are displayed in Table 2. For the most part, the main effects support the notion that students would react more positively when they could use the feedback to revise their work than when they could not. When students were in the “revise” condition, they were more likely to like the instructor (x¯revise = 4.88, x¯no revise = 4.50), less likely to believe that the instructor had a negative impression of them (x¯revise = 3.30, x¯no revise = 3.84), and less likely to experience anger (x¯revise = 2.97, x¯no revise = 3.62). Moreover, those in the “revise” condition were more likely to indicate that the feedback they received was both fair (x¯revise = 4.62, x¯no revise = 3.97) and helpful (x¯revise = 4.60, x¯no revise = 4.01). Somewhat surprisingly, the main effects also reveal that those in the “revise” condition were less satisfied with their grade (x¯revise = 2.80, x¯no revise = 3.34) and were less likely to attribute the results to their own effort (x¯revise = 3.54, x¯no revise = 3.85). No main effects were found with the following dependent variables: perceived difficulty of the assignment, attribution to student ability, happiness, and regret.

Table 2. Means on Measures for Revision of Assignment Possible or Not Possible (n = 266).


Research Question 3: How will student reaction to critical feedback differ when the feedback is provided by peers versus when the feedback is provided by the instructor?

No main effects were found when the “source of feedback” variable was examined. However, when interactions between the “source of feedback” and “level of feedback” variables were examined, eight interaction effects were found. These effects are displayed in Table 3 and described below.

Table 3. Means on Measures for Source and Level of Feedback Interaction Effects (n = 266).


In general, the interaction results suggest that students’ reactions to the level of feedback comments are influenced by whether they receive the feedback comments from instructors or from peers. When peers provide feedback, the only dependent variable that reveals a significant difference between the low and high feedback conditions is “helpfulness of feedback.” That is, students in the high feedback condition were more likely than students in the low feedback condition to indicate that the peer feedback was helpful (x¯low feedback = 4.00, x¯high feedback = 5.10). In contrast, when low and high feedback conditions were compared for the instructor feedback condition, seven significant differences were found. That is, when instructor feedback was furnished, students in the high feedback condition were more likely than students in the low feedback condition to believe that the instructor had a negative impression of them (x¯low feedback = 3.56, x¯high feedback = 4.86), to believe the assignment was difficult (x¯low feedback = 4.04, x¯high feedback = 4.45), and to feel anger (x¯low feedback = 3.04, x¯high feedback = 4.04). Moreover, students receiving a high level of instructor feedback were less likely to like the instructor (x¯low feedback = 4.85, x¯high feedback = 3.94) and were less likely to express regret (x¯low feedback = 3.54, x¯high feedback = 2.79). Finally, students who were in the instructor low feedback condition were more likely to attribute their grade on the assignment to their own ability (x¯low feedback = 3.80, x¯high feedback = 3.22) and their own effort (x¯low feedback = 4.37, x¯high feedback = 3.32).

The results of the open-ended question, displayed in Table 4, support these findings. Words provided by student subjects in the peer feedback condition were quite similar between the low and high feedback conditions. By contrast, comments provided in the instructor feedback condition were fairly dissimilar between the low and high levels of feedback. Consistent with the results in Table 3, there was a tendency for students to describe a high level of instructor feedback using more negative terms.

Table 4. Open-Ended Responses Describing the Feedback by Source and Level of Feedback (≥5% of total).


The results of this study suggest that critical feedback generally elicits a negative response when it comes from an instructor. A high level of instructor-provided feedback was generally perceived more negatively by student subjects than was a low level. Students overall felt angrier and less happy when they received a large number of feedback comments than when they received a few comments or none at all. Regret was the only negative emotion that was higher at lower levels of feedback.

These negative feelings extended to how much students liked the instructor. Students liked the instructor best when they received no feedback, less when they received a low level of feedback, and considerably less when they received a high level of feedback. Likewise, students were more likely to feel that the instructor had a negative impression of them as students when they received a high level of feedback comments than when they received a low level or no feedback.

On the other hand, high levels of critical feedback were more acceptable when the feedback came from other students. Students reacted less negatively to a high level of peer-provided feedback than to a high level of feedback provided by the instructor. Responses to peer-provided feedback were for the most part similar between the low feedback and high feedback conditions. In addition, students perceived high levels of feedback from peers to be more helpful than low levels. By contrast, they did not perceive high levels of feedback to be more helpful than low levels when the instructor provided the feedback.

Despite the negative reaction to large amounts of feedback provided by instructors, some of our findings show that students may appreciate critical information provided by an instructor. In our study, a lower level of feedback from the instructor led students to feel higher levels of regret. Perhaps this result is due to thinking about forgone opportunities to earn a better grade. If a student is simply given a letter grade of B−, he or she may be fairly content and like the instructor, but at the same time wonder how he or she might have earned a better grade. An explanation for a grade in the form of feedback comments may alleviate feelings of regret as students understand more specifically why they earned their particular grades.

The open-ended comments regarding the feedback support the above findings. In the low feedback conditions, regardless of whether the feedback came from peers or from the instructor, students felt that the comments were vague and unclear. This situation could make students feel regret since they are given little guidance as to how to improve their performance. By contrast, when student subjects received more feedback comments from the instructor, they saw the comments as clearer but they also felt more anger. This pattern is similar to the appraisals of certainty and uncertainty, with uncertainty leading to more fear-related emotions and certainty leading to more anger-related emotions (Han, Lerner, & Keltner, 2007). When students receive a high level of feedback they may be more certain about what they did wrong and also feel more anger about the assignment and toward the instructor.

It is not surprising that student subjects generally reacted more positively to feedback when they were able to use it to revise their work. Students in the “revision possible” condition had a more positive impression of the instructor as well as of the feedback itself and reacted with lower levels of anger. More surprising is that students were more satisfied with their grades when they could not revise the assignment than when they were offered the opportunity to revise it to improve their grade. Perhaps offering students an opportunity to revise their work signals that they should be less satisfied with their existing grade: students can foresee the possibility of a higher grade if they undertake the effort to improve their work, whereas students who know they have received their final grade on an assignment are more likely to accept it. When offered the possibility to revise an assignment to earn a higher grade, students may also be motivated to imagine that their initial performance and grade would have been better had they put more effort into the original submission.

Results from the analysis of source and level of feedback (Table 3) also suggest a self-serving bias. Student subjects attributed their grade on the hypothetical assignment more to their own ability when there was a low level of instructor-provided feedback than when there was a high level of instructor-provided feedback. When students receive little feedback from an instructor they may believe that little is wrong with their work, and this belief may help students to maintain a positive self-perception. However, when students receive a lot of critical feedback from an instructor, this feedback may threaten their positive self-perception, making them more likely to blame external causes rather than internal causes such as effort and ability.

These findings have important implications for instructors as they consider providing feedback to students. First, the results of this study suggest much promise in asking students to provide feedback to one another. Not only does the use of peer feedback potentially reduce the burden on the professor, but our study suggests that students find a high level of peer-provided feedback to be the most helpful and receive it less negatively than the same high level of instructor-provided feedback. From a student’s perspective, receiving critical feedback from peers may feel less threatening and genuinely more helpful and constructive.

One possibility is to use peer feedback as a complement to instructor-provided feedback. For example, an instructor might put students together in groups to exchange individual or team-based papers and give feedback to one another. Hearing or reading peers’ assessments of the strengths and weaknesses of one’s work can raise awareness of what can be done to improve, and may also result in significant improvements to the final product before the assignment is turned in for a final grade. If peer feedback on early drafts is retained with the assignment, the instructor can indicate in the final grading how well the students utilized the feedback they had already received, rather than recreating those same types of comments. Students can benefit from both reviewing the work of others and receiving comments from others, and the instructor can be relieved of some of the burden of providing extensive comments. Of course, effective use of peer feedback requires that students take the task seriously. Instructors will undoubtedly need to train students to meet the standards instructors establish. This preparation might be accomplished by providing instructor-created rubrics and demonstrating and practicing their use during class, perhaps using assignments from previous semesters with names redacted.

Second, it appears that when instructors provide critical feedback, students prefer less of it. When students in our study received a higher level of feedback on a hypothetical assignment, they had more negative reactions, and they did not believe the high level of feedback was any more helpful than a low level. It appears that students react negatively to feedback despite the fact that feedback is a positive part of the learning process (Chickering & Gamson, 1987, 1999). So, on the one hand, instructors may want to provide more critical feedback for the sake of thoroughness and of fulfilling their duty as educators; on the other hand, students may respond more positively, and perhaps provide more positive student evaluations of teaching, when offered less feedback.

To maximize the value of the feedback that marketing instructors work so hard to provide, we suggest that instructors carefully choose what to comment on, concentrating on the most important feedback. Although this adjustment might at first feel uncomfortable for instructors accustomed to writing many comments, it promises to be less time-consuming and less taxing than writing out, or entering electronically, many comments, and it has the additional benefit of being received more favorably by students.

It appears to be especially advisable for instructors to limit their critical feedback on assignments that cannot be revised by the student. If there is no opportunity for students to improve their work on an assignment—that is, if the feedback comments are being offered as an explanation for a final grade—our findings suggest that there may be no negative consequence, and possibly a positive consequence, of reducing the amount of feedback to just a few summary comments. If students are more receptive to the feedback comments, they might also be more likely to remember the feedback and apply it to future work.

A third implication is that instructors may want to be cautious about the possible effects of allowing students to revise work based on early feedback. Although students do seem to like having the option to revise or redo assignments to improve their grade, allowing students to revise their work tends to make them feel less satisfied with their current grade and anticipate that their revision will result in a higher grade. When students are allowed to revise their work, there is no guarantee that their revision will be successful. If they produce an inadequate revision, they could receive a final grade that is below their now heightened expectation, causing them to feel anger and have other negative feelings toward the instructor.

This study is limited by the fact that it placed students in a hypothetical situation that mirrors an educational experience. The extent to which students’ responses to our hypothetical situation duplicate what they would actually do or feel when receiving feedback on a real assignment is unclear. Despite this limitation, the study has a number of strengths. First, because student subjects were randomly assigned to the various treatment cells, our experiment should not have incurred any significant selection bias. Second, a healthy sample size (n = 266) and our use of cumulative GPA as a covariate during some of our analyses provided ample statistical power. Third, all of our experimental manipulations were carefully designed and pretested for clarity. Finally, subjects were screened for whether they could correctly identify the experimental treatment to which they were exposed; only those who did so were included in the analyses.

Finding that higher levels of instructor-provided feedback generally elicited more negative reactions, this study raises the troubling question of how marketing instructors can best share their expertise to further students’ learning. It also partially contradicts Ackerman and Gross (2010), who found that a moderate level of feedback elicited the most positive reactions in students. The current study sampled students enrolled in a basic marketing course, whereas Ackerman and Gross (2010) sampled students enrolled in an upper-division marketing course (Consumer Behavior); this difference suggests that there may be moderating variables that influence the optimal level of feedback for students. An interesting area for future research is to investigate how interest in, or self-efficacy regarding, a course might affect how students respond to feedback and how much feedback they desire. Perhaps students taking an elective or advanced class in their chosen major will desire a moderate or even high level of feedback on their work as they develop more self-efficacy and regard more advanced coursework with more interest. On the other hand, students in a basic required or core course might generally respond more negatively to any level of feedback.

Another way to address the question of how best to provide feedback is to examine how the thoroughness of the feedback and its specificity or fit for improvement affect its reception by students. For example, general feedback explaining that a student’s writing does not meet expectations may be received differently from feedback that not only points out the poor writing but also illustrates exactly how it can be improved.

One could also research how reactions to feedback are affected by the initial or final grade earned on an assignment. In our study, we assigned a B− grade to the hypothetical paper. Many students may have found this grade acceptable, causing them to have little concern for the feedback that was provided. A lower grade, or a grade lower than expected, might have given students more motivation to examine the feedback, and they may have reacted differently.

Future researchers could also experiment with the form of the feedback. Feedback can be typed or handwritten, and it can be delivered on paper, by computer, or orally. Moreover, the degree to which feedback comments are personalized could impact the results. How, for example, would preceding each critical comment with the student’s first name affect the student’s reception of the feedback? Above and beyond the question of how feedback should be delivered is the question of whether it need always be provided. Since some students ignore feedback and are angered by it, there may be situations where it is productive for instructors to provide feedback only to those students who request it. Providing feedback only to those who request it, particularly when there is no further opportunity for students to use the feedback for revision and improvement on an assignment, can potentially reduce the instructor’s workload while still providing feedback to those students who will appreciate it.

Feedback Comments Used in the Low and High Feedback Conditions

Low Feedback Condition

  • “Analysis is not clear.”

  • “Awkward writing—edit for clarity and flow.”

High Feedback Condition

  • “No clear thesis statement.”

  • “Analysis is not clear.”

  • “Evidence and analysis need to be presented.”

  • “Does not follow instructions for the assignment.”

  • “Misuse of concepts.”

  • “Shows misunderstanding of theory.”

  • “Awkward writing—edit for clarity and flow.”

  • “No clear conclusion.”

  • “No evidence presented to support idea.”

  • “Recommendations are not realistic based on the situation.”

Dependent Measures

Assignment and Instructor-Related Measures

  • Perceived Instructor Impression

  •  1a. This instructor probably has a negative impression of my abilities.

  •  1b. This instructor likely thinks I’m not able to perform well in this course.

  •  1c. This instructor probably thinks I just can’t do the work in this course.

  • Student Satisfaction with Grade

  •  2a. I would be satisfied with my grade.

  •  2b. I would be pleased with my grade.

  •  2c. I would be content with my grade.

  • Like Instructor

  •  3a. I really like this instructor.

  •  3b. I would recommend this instructor to others.

  •  3c. This is a great instructor.

  • Perceived Difficulty of Assignment

  •  4a. I think this assignment must have been easy.

  •  4b. I think this assignment was not difficult.

  •  4c. I think this assignment was not hard.

Attribution Measures

  • Attribution to Personal/Student Effort

  •  5a. I really worked hard on this assignment.

  •  5b. I really put forth the effort needed on this assignment.

  •  5c. I really put in the necessary time on this assignment.

  • Attribution to Personal/Student Ability

  •  6a. My performance was due to my ability, nothing else.

  •  6b. Since I had the knowledge and skills, I was successful on this assignment.

  •  6c. My performance on this assignment reflected my abilities.

Feedback Measures

  • Fairness of Feedback

  •  7a. This feedback is fair.

  •  7b. This feedback reflects fairly on the performance.

  •  7c. The instructor’s feedback is fair to me.

  • Helpfulness of Feedback

  •  8a. This feedback is really helpful.

  •  8b. I think this feedback can really help me improve.

  •  8c. I feel this feedback would be really helpful to me.

Emotion Measures

  • Anger

  •  9a. Anger

  •  9b. Frustration

  •  9c. Resentment

  • Happiness

  •  10a. Glad

  •  10b. Exhilaration

  • Regret

  •  11a. Regret

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

Ackerman, D. S., Gross, B. L. (2010). Instructor feedback: How much do students really want? Journal of Marketing Education, 32, 172-181.
Ashford, S. J., Blatt, R., VandeWalle, D. (2003). Reflections on the looking glass: A review of research on feedback-seeking behavior in organizations. Journal of Management, 29, 773-799.
Bollag, B. (2006). Making an art form of assessment. Chronicle of Higher Education, 56(10), A8-A10. Retrieved from http://chronicle.com/article/Making-an-Art-Form-of/17645/
Burke, M. C., Edell, J. A. (1989). The impact of feelings on ad-based affect and cognition. Journal of Marketing Research, 26, 69-83.
Carless, D. (2006). Differing perceptions of the feedback process. Studies in Higher Education, 31, 219-233.
Chickering, A. W., Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. American Association of Higher Education Bulletin, 39, 3-7.
Chickering, A. W., Gamson, Z. F. (1999). Development and adaptations of the seven principles for good practice in undergraduate education. New Directions for Teaching and Learning, 80, 75-81.
Cho, K., Schunn, C. D., Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98, 891-901.
Conant, J. S., Smart, D. T., Kelley, C. A. (1988). Master teaching: Pursuing excellence in marketing education. Journal of Marketing Education, 10, 3-13.
Czaplewski, A. J. (2009). Computer-assisted grading rubrics: Automating the process of providing comments and student feedback. Marketing Education Review, 19, 29-36.
Debuse, J. C. W., Lawley, M. (2011). Using innovative technology to develop sustainable assessment practices in marketing education. Journal of Marketing Education, 33, 160-170.
Dixon, A. L., Spiro, R. L., Jamil, M. (2001). Successful and unsuccessful sales calls: Measuring salesperson attributions and behavioral intentions. Journal of Marketing, 65(3), 64-78.
Faranda, W. T., Clarke, I. (2004). Student observations of outstanding teaching: Implications for marketing educators. Journal of Marketing Education, 26, 271-281.
Ferris, D. R. (1995). Student reactions to teacher response in multiple-draft composition classrooms. TESOL Quarterly, 29, 33-53.
Field, A. (2013). Discovering statistics using IBM SPSS statistics. Thousand Oaks, CA: Sage.
Finkelstein, S. R., Fishbach, A. (2012). Tell me what I did wrong: Experts seek and respond to negative feedback. Journal of Consumer Research, 39, 22-38.
Fluckiger, J., Vigil, Y., Pasco, R., Danielson, K. (2010). Formative feedback: Involving students as partners in assessment to enhance learning. College Teaching, 58, 136-140.
Forsyth, D. R. (1980). The function of attributions. Social Psychology Quarterly, 43, 184-189.
Gillespie, N. A., Walsh, M., Winefield, A. H., Dua, J., Stough, C. (2001). Occupational stress in universities: Staff perceptions of the causes, consequences, and moderators of stress. Work and Stress, 15, 53-72.
Goldstein, L. M. (2004). Questions and answers about teacher written commentary and student revision: Teachers and students working together. Journal of Second Language Writing, 13, 63-80.
Gopinath, C. (1999). Alternatives to instructor assessment of class participation. Journal of Education for Business, 75, 10-14.
Granitz, N. A., Koernig, S. K., Harich, K. R. (2009). Now it’s personal: Antecedents and outcomes of rapport between business faculty and their students. Journal of Marketing Education, 31, 52-65.
Gruber, T., Lowrie, A., Brodowsky, G. H., Reppel, A. E., Voss, R., Chowdhury, I. N. (2012). Investigating the influence of professor characteristics on student satisfaction and dissatisfaction: A comparative study. Journal of Marketing Education, 34, 165-178.
Guskey, T. R. (2003). How classroom assessments improve learning. Educational Leadership, 60(5), 6-11.
Han, S., Lerner, J. S., Keltner, D. (2007). Feelings and consumer decision making: The appraisal-tendency framework. Journal of Consumer Psychology, 17, 158-168.
Higgins, R., Hartley, P., Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27, 53-64.
Hughes, I. (1995). Peer assessment. Higher Education for Capability, 1(3), 39-43.
Hyland, F. (2003). Focusing on form: Student engagement with teacher feedback. System, 31, 217-230.
Hyland, P. (2000). Learning from feedback on assessment. In Booth, A., Hyland, P. (Eds.), The practice of university history teaching (pp. 233-247). Manchester, England: Manchester University Press.
Jones, E. E., Davis, K. E. (1965). From acts to dispositions: The attribution process in person perception. In Berkowitz, L. (Ed.), Advances in experimental social psychology (Vol. 2, pp. 219-266). New York, NY: Academic Press.
Marsh, H. W. (1986). Self-serving effect (bias?) in academic attributions: Its relation to academic achievement and self-concept. Journal of Educational Psychology, 78, 190-200.
McBane, D. A. (1996). Using technology to increase feedback when grading assignments. Marketing Education Review, 6(2), 45-58.
McGourty, J., Dominick, P., Reilly, R. R. (1998). Incorporating student peer review and feedback into the assessment process. Frontiers in Education Conference Proceedings, 1, 14-18.
Miller, D. T., Ross, M. (1975). Self-serving biases in the attribution of causality: Fact or fiction? Psychological Bulletin, 82, 213-225.
Peltier, J. W., Drago, W., Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25, 260-276.
Peltier, J. W., Schibrowsky, J. A., Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29, 140-153.
Popham, W. J. (2008). Transformative assessment. Alexandria, VA: Association for Supervision and Curriculum Instruction.
Richins, M. L. (1997). Measuring emotions in the consumption experience. Journal of Consumer Research, 24, 127-146.
Ryan, G. J., Marshall, L. L., Porter, K., Jia, H. (2007). Peer, professor and self-evaluation of class participation. Active Learning in Higher Education, 8, 49-61.
Schunk, D. H., Swartz, C. W. (1993). Writing strategy instruction with gifted students: Effects of goals and feedback on self-efficacy and skills. Roeper Review, 15, 225-230.
Smart, D. T., Kelley, C. A., Conant, J. S. (2003). Mastering the art of teaching: Pursuing excellence in a new millennium. Journal of Marketing Education, 25, 71-78.
Stellmack, M. A., Keenan, N. K., Sandidge, R. R., Sippl, A. L., Konheim-Kalkstein, Y. L. (2012). Review, revise, and resubmit: The effects of self-critique, peer review, and instructor feedback on student writing. Teaching of Psychology, 39, 235-244.
Stiggins, R. J. (2005). Student-involved assessment for learning (4th ed.). Upper Saddle River, NJ: Pearson.
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education, 31, 379-394.
Yangin Eksi, G. (2012). Peer review versus teacher feedback in process writing: How effective? International Journal of Applied Educational Studies, 13, 33-48.
