Objectively assessing student creative work in the fields associated with mass media can be problematic. Communicating expectations to students, as well as providing them with a clear yet flexible rubric for evaluation of copywriting, newswriting, audio production, video production, and web-design, requires examination of the relevant student learning outcomes. This article explores the process of rubric design for digital portfolio evaluation across these areas, with the goal of finding measures that effectively convey expectations for each area to students and provide precise evaluation and feedback for both grading and assessment purposes.

“Assessment is an important part of evaluating individual students, and has become an important part of programmatic evaluation” (Christ & Broyles, 2008, p. 393). In assessment, however, performance of any kind can be a complex concept to attempt to evaluate objectively. In electronic media programs, that performance can be in writing, audio, video, web-design, or any number of other areas. Each sample created by a student is unique to that individual performer, that performer’s skill set, and his or her professional goals. As such, the development of any rubric to accurately and objectively measure the knowledge and ability of the performer, as well as the quality of the performance, must be crafted with that inherent uniqueness in mind.

At the forefront of the issue is the concern many teachers experience when trying to evaluate performance-based material: performing an evaluation of creative work with a level, impartial, objective eye. Retired college professor John Tierney (2013), writing for The Atlantic, agrees:

No matter how hard you try, you realize there’s a good chance you’re grading some students more harshly than they deserve, and giving others more credit than they deserve. It doesn’t have anything to do with favoritism . . . but with human error and weakness.

Tierney goes on to say that state of mind, emotional state, other responsibilities, even grading on one day rather than another, can affect the fair and accurate assessment of a student’s material (Tierney, 2013). Add to that a faculty member’s concerns about how his or her evaluation compares with a colleague’s, and a perpetual uncertainty is created. If a rubric could be created to reduce that uncertainty, the task of evaluating student work could be made to feel more structured and evenhanded.

A rubric is, according to Popham (1997), “a scoring guide used to evaluate the quality of students’ responses—for example, their written compositions, oral presentations, or science projects” (p. 72). Reddy and Andrade (2010) describe a rubric as “a document that articulates the expectations for an assignment by listing the criteria or what counts, and describing levels of quality from excellent to poor” (p. 435).

Both Popham (1997) and Reddy and Andrade (2010) agree that, to create an effective rubric for assessment purposes, the specific criteria to be measured must be determined, along with descriptors for each criterion. The role of the measured elements in the overall program, however, must also be determined. Often, there is uncertainty as to which should come first. Madeja (2004) describes the problem as one of deciding “whether to design curriculum and then evaluate it based on criteria and techniques applicable to its content or based on its design evaluation devices, which dictate the content” (p. 5).

If student learning outcomes are already established, the logical first step in designing a rubric must be determining the essential skills to be measured (Tractenberg, Umans, & McCarter, 2010). If the rubric is to function for individual portfolios as well as program-level assessment, those skills must be represented in specific student learning outcomes (SLOs). Once the appropriate SLOs are identified, the next step is deciding how they will be measured. When student learning outcomes are measured through senior portfolios in a capstone course, the results should indicate not only the student’s individual performance on the assignment but also the student’s skill level and mastery of related outcomes for the overall academic program.

The purpose of this work is to look at the issue of rubric creation for electronic media activities with fresh eyes. The decision not to include existing media-related rubric research was deliberate, an attempt to avoid bias toward, or influence from, previously suggested methods of evaluating mediated content via rubric.

For each student learning outcome to be measured, a consistent, objective evaluative system must be created that will allow fair assessment of a student’s abilities, regardless of that student’s specific focus within the overall area of electronic media (Ciorba & Smith, 2009; Morgan, 1999). Although quantitative rubrics, often utilizing Likert-type scale measurement, are the easiest to apply, creative work demands more depth in both evaluation and feedback (Ciorba & Smith, 2009; Morgan, 1999; Saunders & Holahan, 1997). Tractenberg et al. (2010) suggest a four-level evaluative model, from 1 (beginner) to 4 (proficient). Ciorba and Smith (2009) indicate a preference for a 5-point scale in their assessment of skill level in musical performance. Each of the aforementioned rubrics, however, contains very specific expectations for student performance. Each allows students to understand how they are to be graded, and the resulting information can then be used to evaluate individual performance while still supplying program-level assessment data.

Assessing Performance

The objective evaluation of creative work is likely a challenge faced by teachers around the world, in a variety of pedagogical areas. Elements of evaluation drawn from live performance and production suggest how a rubric for measuring skill and knowledge in electronic media performance and production might be constructed. Creative work of any type, such as in art, music, theater, or media, presents a different set of academic challenges than more structured disciplines such as math or the hard sciences (Gale & Bond, 2007). Regardless of the hurdles, researchers have concluded that assessment instruments, tied directly to specific criteria, can be created to allow for the assessment of creative work (Lau, 2011; Saunders & Holahan, 1997; Soep, 2005).

The challenge of assessing performance and production elements of creative work lies in ascertaining how well students have learned the fundamentals of their craft, “namely, those abilities and capacities for artistic understanding, production, interpretation, analysis, and above all, literate engagement” (Gale & Bond, 2007, p. 126). At the same time, one must be careful that any institutionalized assessment does not force students to simply parrot one particular analytical perspective or style (Gale & Bond, 2007).

Rubrics dealing with creative content or performance tend toward Likert-type scale measurement, where categories of proficiency are designated and assigned point values (Tractenberg et al., 2010). This fulfills the necessity of creating objective categories within which to assign skill and ability but, without a clear and obvious distinction between the levels of measurement, students will not understand how their work is to be assessed, nor will they receive useful feedback from the assessment instrument. According to Reddy and Andrade (2010),

Used as part of a student-centered approach to assessment, rubrics have the potential to help students understand the targets for their learning and the standards of quality for a particular assignment, as well as make dependable judgments about their own work that can inform revision and improvement. (p. 437)

Electronic Media Performance and Production Assessment

When assessing a student’s progress toward completion of student learning outcomes in an electronic media program, three separate areas are currently assessed, each bringing its own unique challenges: Writing for Media, Mediated Performance and Production (Audio and/or Video), and Web-Development and Design. The artifact used as a model for this study is the final digital portfolio in the Senior Seminar capstone course at a medium-sized Midwestern university. The course has a basic grading rubric for the final portfolio project, but recent updates to the program’s assessment plan, along with the results of several assessment cycle reports, prompted the proposal of a new rubric.

Peter Orlik (2010), speaking about media copywriting, suggests that, because copy is an art form, media writers must find a balance of both form and content to create successful work. This division between form and content holds for all three areas of student media work; each area has its own form rules and content expectations. Gale and Bond (2007) explain that support for assessment of the arts within coursework does not necessarily point directly toward professional endeavors, but instead aims “for a working knowledge, an aesthetic literacy that will stand students in good stead for their lives as citizens, cultural consumers, audience members and (with any luck) season subscribers” (p. 129).

The evaluation of form is a relatively straightforward process. Each content area has firm rules for its evaluation. For writing, the rules involve spelling, grammar, and sentence structure (Orlik, 2010). For production, they often focus on composition, tone, lighting, shot choices, and audio levels. For web-design, adherence to coding rules, recognized design conventions, and usability considerations is paramount (Castro & Hyslop, 2013; Krug, 2009). Such skills are most often the first taught in any media class and can be assessed without much concern for bias or uneven grading. While creativity is important, a mastery of the appropriate fundamental skills must be gained before the content elements can flourish (Keding, 1988).
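
To make these form rules concrete, they can be imagined as a simple checklist per content area. The sketch below is illustrative only, using the criteria named above; the Python names and structure are hypothetical and not part of the rubric described in this article.

```python
# Illustrative only: form criteria per content area, drawn from the rules
# named in the paragraph above. All names here are hypothetical.
FORM_CRITERIA = {
    "writing": ["spelling", "grammar", "sentence structure"],
    "production": ["composition", "tone", "lighting", "shot choices", "audio levels"],
    "web_design": ["valid code", "recognized design conventions", "usability"],
}

def unmet_form_criteria(area: str, checklist: dict) -> list:
    """Return the form criteria for `area` not marked True in `checklist`."""
    return [c for c in FORM_CRITERIA[area] if not checklist.get(c, False)]

# Example: a writing sample that still has grammar problems to address.
print(unmet_form_criteria("writing", {"spelling": True, "sentence structure": True}))
# -> ['grammar']
```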

Once form has been evaluated, content is most often evaluated comparatively against appropriate professional standards. These standards serve as the benchmark for assessing a student’s ability in writing, production, or performance. The concept of professional standards, however, is elusive at best. So the question becomes “How can one best create a rubric that truly reflects the concept of professional standards in each area of modern media performance and production?”

Creating the Effective Rubric

Trying to objectively define professional standards falls into the same category as the saying “I may not know art, but I know what I like.” As media educators, many of whom have been media professionals at one time or another, faculty members develop an innate feel, based on experience as viewers and as content creators, for when something is good enough to be published through any number of different forms of media. For the purposes of this research, professional standards is defined as follows: student work of high enough quality that, if produced for a professional media organization, it would be airable or otherwise publishable.

There are no standardized industry metrics of professional standards for content creation, however. Such metrics vary depending on medium and market, so each faculty member is left to his or her own judgment when attempting to find the line between good enough and not good enough.

Based on current research and a comparison of selected existing rubrics from other creative disciplines, it becomes clear that a more explanatory rubric must be developed and implemented. Rubrics were selected that possessed expectations similar to those desired when evaluating media-based digital portfolios. Rubrics dealing with art, creative writing, vocal and instrumental performance, non-media-oriented electronic portfolios, and teacher education were evaluated for elements that could serve as models for an electronic media rubric (Ciorba & Smith, 2009; Cope, Kalantzis, McCarthey, Vojak, & Kline, 2011; Evans, Daniel, Mikovch, Metze, & Norman, 2006; Gale & Bond, 2007; McLaren, 2012; Saunders & Holahan, 1997; Soep, 2005; Spence, 2010).

Essential rubric contents

Regardless of which type of performance is being evaluated, one criterion measured in the majority of these rubrics is the ability to communicate critically. Gale and Bond (2007) explain that “students must be able to discuss their own work cogently and critically, deepening others’ understanding of that work and communicating not only their intentions but also to what extent they achieved their creative goals” (p. 142). They further indicate that faculty members must use a similar level of critical communication in evaluating and assessing student work.

Saunders and Holahan (1997) provide an example of a 10-point scale for assessing a student’s performance in instrumental music, ranging from the fullest, most accurate performance to an uncharacteristic, inaccurate attempt. Soep (2005) explains that critique must be part of the assessment of any form of art, pointing out:

Critique demands that participants express their assessments to the very people responsible for the work under discussion. It feeds forward, in that artists use insights from critique to make further adjustments to the specific project under discussion, to reject the suggested changes, or to apply new ideas to their future efforts. . . . Teachers who practice performance-based approaches to assessment frame evaluation as an episode of learning. (p. 40)

Spence (2010) details the development of a 6-point rubric ranging from “low performance” to “exceeds expectations” for areas such as “ideas, organization, voice, word choice, sentence fluency, and conventions” (p. 338). Ciorba and Smith (2009) present a model 5-point rubric for undergraduate vocal performance juries that serves as a template for the rubric created herein.

Once the methods and designs used in the example rubrics were evaluated, a means of assessing the appropriate student learning outcomes, one that would provide additional information to students, was created. Each segment of the new rubric (writing, audio and video, and web) was addressed separately, then combined into a single rubric usable for assessing both a student’s final digital portfolio and the student’s level of success in achieving the relevant student learning outcomes for the program as a whole.

Within the proposed rubric, three areas of emphasis have been created, with a separate point scale determined for each. The rubric presupposes both that writing is an essential element of the student’s education, to be measured regardless of area of emphasis, and that the student portfolio is built as a website and thus must contain web-based content.

Rubric Modeling

Specific elements from several of the rubrics examined are used as models for the creation of the sample rubric included herein for evaluating media capstone projects. Palmquist (1997) and Hafner and Hafner (2003) provide examples of layout and a starting point for determining the quantitative elements of the rubric. Tractenberg et al. (2010) offer insight into the relationship between rubric-based assessment and course learning objectives. Research by Flowers and Hancock (2003), Mansilla, Duraisingh, Wolfe, and Haynes (2009), Daniels (2010), and Reddy and Andrade (2010) contributes to an understanding of how to write the evaluative elements within each row of the rubric. Popham (1997) provides what is essentially the guide used to move from first draft to final version of the rubric by examining what a rubric should and should not be, as well as a rubric’s role in student learning, student evaluation, and assessment.

Rubric Key

The result of the evaluative modeling from the selected examples is a holistic, task-oriented rubric (Christ, 2014). It focuses on three separate areas of ability, a montage evaluation, and an overall evaluative block at the end. A student is encouraged to declare one of the three areas as his or her primary emphasis for the capstone digital portfolio project: Writing, Media Content (Audio and/or Video), or Web-Design. The application of the rubric for each area is detailed below.

Writing-based

The writing-based portfolio rubric places the highest emphasis on writing skills, at 30% impact; followed by 20% impact each for mediated content (in this case, writing samples for audio/video specifically), web-design (overall composition and presentation), and the overall impact of the portfolio; and 10% impact for the creation of a montage.

For a writing-based portfolio, a montage would be a preview of the content within the portfolio, allowing a visitor, at first glance, to understand the depth and variety of material deeper in the portfolio. The method will likely vary; it is the effectiveness that will be evaluated.

Mediated content-based

The mediated content-based (audio and/or video) portfolio rubric readjusts the percentages to favor the inclusion of audio and/or video content (with samples of work accessible as part of the online portfolio) at 30% impact; followed by 20% impact each for writing (copywriting, newswriting, audio segment writing, or video segment writing), web-design (overall composition and presentation), and the overall impact of the portfolio; and 10% impact for the creation of a montage of production content as a preview of what is to be found within the portfolio.

Web-design–based

The web-design–based portfolio shifts the weight of the point values to the web-design portion of the rubric, at 30% impact. It is assumed that the web-based development and design of the portfolio will be of higher quality than under the previous two emphases. The inclusion of writing samples (copywriting, newswriting, or audio/video scripts), mediated content (included as active web-content), and the overall impact of the portfolio are each set at 20% impact; as with the other two emphases, 10% impact is reserved for the creation of a montage of content as a demonstration of web-design ability.

For a web-design–based portfolio, a montage would be a slideshow, mediated trailer, or similar element, placed near the top of the page, that provides a quick look, on first page-load, at the contents of the rest of the web-based portfolio. Again, the method will likely vary; it is the effectiveness that will be evaluated.
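
Because all three emphases reuse the same five evaluative blocks with shifted weights, the percentages above can be summarized compactly. The following is a minimal sketch encoding the weights exactly as stated in this section; the dictionary keys and variable names are illustrative assumptions, not part of the published rubric.

```python
# Emphasis weights as described in the three subsections above.
# Key names are illustrative; the percentages come from the text.
EMPHASIS_WEIGHTS = {
    "writing": {"writing": 0.30, "mediated_content": 0.20,
                "web_design": 0.20, "overall_impact": 0.20, "montage": 0.10},
    "mediated": {"writing": 0.20, "mediated_content": 0.30,
                 "web_design": 0.20, "overall_impact": 0.20, "montage": 0.10},
    "web": {"writing": 0.20, "mediated_content": 0.20,
            "web_design": 0.30, "overall_impact": 0.20, "montage": 0.10},
}

# Sanity check: each emphasis should account for 100% of the portfolio grade.
for emphasis, weights in EMPHASIS_WEIGHTS.items():
    assert abs(sum(weights.values()) - 1.0) < 1e-9, emphasis
```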

Overall rubric design

Regardless of emphasis, the rubric works on a six-level scale for each of the following content areas: writing form, writing creativity, mediated content form, mediated content creativity, web-based form, web-based creativity, montage form, montage creativity, overall form, and overall creativity.

Rubric Evaluative Levels/Scoring

Level 1 work indicates a critical lack of demonstrable ability in the respective area, a clear failure in the application of the fundamentals. Levels 2 to 4 represent severe to minor lack of demonstrable ability in an area, a noticeable deficit in form or quality that renders the work unacceptable when compared with a professional standard. Level 5 represents work that is as good as expected for a graduating university senior, work that shows distinct promise but is not quite of professional quality. Level 6 represents work that is comparable with what is seen in and on commercial media on a daily basis by American audiences, and is beyond the level expected from a graduating university senior.
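
The rubric itself is holistic and does not prescribe a numeric aggregation formula. One plausible reading, however, is to average each content area's form and creativity levels, normalize the 1-to-6 result, and combine the blocks using the emphasis weights described earlier. The sketch below works only under that assumption; the function and key names are hypothetical.

```python
# Weights for a mediated content-based portfolio, as given earlier in the text.
MEDIATED_WEIGHTS = {"writing": 0.20, "mediated_content": 0.30,
                    "web_design": 0.20, "overall_impact": 0.20, "montage": 0.10}

def portfolio_score(levels: dict, weights: dict) -> float:
    """Combine six-level (form, creativity) scores into a weighted 0-100 score.

    `levels` maps each block to a (form, creativity) pair of integers 1-6.
    Averaging the pair and dividing by 6 is an illustrative assumption; the
    published rubric leaves the final judgment to the instructor.
    """
    total = 0.0
    for block, weight in weights.items():
        form, creativity = levels[block]
        total += weight * ((form + creativity) / 2) / 6
    return round(100 * total, 1)

# Example: strong production work, weaker writing, solid web presentation.
sample = {"writing": (4, 4), "mediated_content": (6, 5),
          "web_design": (5, 4), "overall_impact": (5, 5), "montage": (5, 5)}
print(portfolio_score(sample, MEDIATED_WEIGHTS))  # -> 80.8
```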

The content of the rubric, and its grading scale, are based on an in-depth knowledge of the department’s program and capstone course expectations, as well as a working knowledge of the program-level student learning outcomes expressed in the syllabus for the capstone course. This rubric is meant to serve as a guide to students and instructors but, as a holistic rubric, relies on instructor commentary to fully flesh out the specifics of each decision made.

Table 1. Mediated Capstone Digital Portfolio Evaluation Rubric.

Given its nature as a capstone final project rubric, it is expected that the majority of evaluative criticism of individual elements, aimed at developing technique and correct usage, has already occurred in the courses taken before this stage of the student’s education. Likewise, while additional comments to the student are expected to accompany the rubric after the portfolio has been evaluated, it is anticipated that, at this stage, the student will have a satisfactory grasp of the basics of each evaluated area of performance.

While an instructor must assess various performance-style works based on his or her own knowledge and understanding of the medium(s) being evaluated, a working rubric can aid in a more objective assessment of media students’ capstone portfolios, using methods suggested in the academic areas of art, music, creative writing, theater, and others. Recognizing that true objectivity, such as that found in mathematics and the sciences, will never be attainable in a performance-based medium, this rubric creates a structure within which faculty can establish benchmarks for quality and content in each of the listed areas and communicate those expectations to students by sharing the rubric before the portfolio is submitted. It is hoped that this rubric will provide a foundation for media faculty who are responsible for assessing creative and performance-based work, at either the individual course or program level, to do so on a more objective basis.

This rubric fulfills the needs of the capstone course in the department and program used as a model in this research. It is not, however, a comprehensive rubric for all of media assessment. Portfolio style and content are expected to differ somewhat in other programs, which may emphasize the evaluative areas more or less than is suggested here. Each section of the rubric presented here also contains evaluative statements commensurate with the expectations of quality for students in one specific program, and will likely need to be altered to fit any other program that seeks to use it in its own assessment activities. While it could serve as a model for other forms of electronic media evaluation, it is not appropriate as-is for related areas such as video game design, mobile application design, or online-based programming other than web programming.

Likewise, this rubric relies on an innate understanding of what constitutes professional standards across a variety of media industries, but the standard itself is not defined. As such, the rubric suggested herein carries inherently subjective evaluative criteria that will likely produce notable variance in scoring across curricula using the proposed schema.

Finally, as a holistic rubric, this form of assessment assumes a certain level of education and ability on the part of the student and, therefore, is not suitable as-written for providing in-depth feedback on specific elements within each evaluative area. The form of the rubric could be used as a model for more detailed rubrics to be applied to specific areas (newswriting, audio production for commercial music-based radio, video production for television news, web-design specifically for non-commercial broadcasters, etc.), but is much too general to be applied in its current form to such precise evaluations.

The next step in this area of research would be to investigate the use of rubrics in exclusively media-related programs and to compare the criteria, evaluative measures, and processes each program utilizes for its assessment, whether in an individual course or at the program level. From this, a more widely applicable performance-based media assessment rubric might be created.

Second, the aforementioned rubric is for general use in evaluating capstone projects in a media program. It does not address the minute technical and creative elements necessary to evaluate work in audio, video, writing, or web-design and development. Development of a rubric that brings to light the various levels of each specific criterion for an individual media type could be used to further refine the general rubric, aiding an instructor who lacks experience with one or more media types.

Finally, the subject of professional standards is suggested herein as an intuitive measure relying on the knowledge and observations of the person scoring the rubric, but is not detailed in any way. A study could be done, perhaps utilizing in-depth interviews with professionals from a variety of media fields, to create a more detailed and specific guide to what the concept of professional standards entails. This could then provide a more objective foundation for evaluations using this, or similar, rubrics.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

Castro, E., Hyslop, B. (2013). HTML5 and CSS3: Visual QuickStart Guide (8th ed.). Berkeley, CA: Peachpit Press.
Christ, W. (2014). So what is a model rubric? Journal of Media Education, 5(3), 5-9.
Christ, W., Broyles, S. (2008). Graduate education at AEJMC schools: A benchmark study. Journalism & Mass Communication Educator, 62, 376-401.
Ciorba, C., Smith, N. (2009). Measurement of instrumental and vocal undergraduate performance juries using a multidimensional assessment rubric. Journal of Research in Music Education, 57(5), 5-15.
Cope, B., Kalantzis, M., McCarthey, S., Vojak, C., Kline, S. (2011). Technology-mediated writing assessments: Principles and processes. Computers and Composition, 28, 79-96.
Daniels, E. (2010). Using a targeted rubric to deepen direct assessment of college students’ abilities to evaluate the credibility of sources. College & Undergraduate Libraries, 17, 31-43.
Evans, S., Daniel, T., Mikovch, A., Metze, L., Norman, A. (2006). The use of technology in portfolio assessment of education candidates. Journal of Technology and Teacher Education, 14(1), 5-27.
Flowers, C., Hancock, D. (2003). An interview protocol and scoring rubric for evaluating teacher performance. Assessment in Education, 10, 161-168.
Gale, R., Bond, L. (2007). Assessing the art of craft. The Journal of General Education, 56, 126-148.
Hafner, J., Hafner, P. (2003). Quantitative analysis of the rubric as an assessment tool: An empirical study of student peer-group rating. International Journal of Science Education, 25, 1509-1528.
Keding, A. (1988). Teach concepts before teaching ad copywriting. Journalism & Mass Communication Educator, 43, 105-107.
Krug, S. (2009). Rocket Surgery Made Easy. Berkeley, CA: Pearson Education.
Lau, K. (2011). The difficulties of assessing design students’ creativity: A critical review on various approaches for design education. Journal of Design Research, 9, 203-219.
Madeja, S. (2004). Alternative assessment strategies for schools. Arts Education Policy Review, 105(5), 3-13.
Mansilla, V., Duraisingh, E., Wolfe, C., Haynes, C. (2009). Targeted assessment rubric: An empirically grounded rubric for interdisciplinary writing. The Journal of Higher Education, 80, 334-353.
McLaren, S. (2012). Assessment is for learning: Supporting feedback. International Journal of Technology and Design Education, 22, 227-245.
Morgan, B. (1999). Portfolios in a preservice teacher field-based program: Evolution of a rubric for performance assessment. Education, 119, 416-427.
Orlik, P. (2010). Broadcast/broadband copywriting (8th ed.). Boston, MA: Allyn & Bacon.
Palmquist, B. (1997). Scoring rubric for interview assessment. The Physics Teacher, 35, 88-89.
Popham, W. (1997, October). What’s wrong—and what’s right—with rubrics. Educational Leadership, 55(2), 72-75.
Reddy, Y., Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35, 435-448.
Saunders, T., Holahan, J. (1997). Criteria-specific rating scales in the evaluation of high school instrumental performance. Journal of Research in Music Education, 45, 259-272.
Soep, E. (2005). Critique: Where art meets assessment. Phi Delta Kappan, 87, 38-63.
Spence, L. (2010). Discerning writing assessment: Insights into an analytical rubric. Language Arts, 87, 337-352.
Tierney, J. (2013, January). Why teachers secretly hate grading papers. The Atlantic. Retrieved from http://www.theatlantic.com/national/archive/2013/01/why-teachers-secretly-hate-grading-papers/266931/
Tractenberg, R., Umans, J., McCarter, R. (2010). A mastery rubric: Guiding curriculum design, admissions and development of course objectives. Assessment & Evaluation in Higher Education, 35(1), 17-35.

Author Biography

Jeffrey S. Smith is an assistant professor at Central Michigan University’s School of Broadcast & Cinematic Arts, focusing on assessment, web-design, social media, computer-mediated communication, and team building in virtual environments.
