As technology continues to become more ubiquitous and touches almost every aspect of the composing process, students and teachers are faced with new means to make writing a multimodal experience. This article embraces the emerging sector of wearable technology, presenting wearable writing strategies that would reimagine composition pedagogy. Specifically, the article introduces Google Glass and explores its affordances in reframing student peer-review activities. To do so, the author presents a brief overview of wearables and writing technology, a case study of how the author deployed Google Glass in a first-year writing course, and a set of tips for using wearable technology in general and technical writing courses.

Writing in the 21st century is multimodal, social, and thus inherently challenging. Yancey (2009), in a report to the National Council of Teachers of English (NCTE), where she served as president, calls on composition instructors and literacy enablers to respond to the challenges facing students today by inventing new models of writing:

Historically, like today, we compose on all the available materials. Whether those materials are rocks or computer screens, composing is a material as well as social practice; composing is situated within and informed by specific kinds of materials as well as by its location in community. We have simply never seen it quite so clearly as we do now. (p. 8)

Following the innovation and proliferation of personal computers and smart devices, writing has been transformed by newly available means of expression and information consumption. As the NCTE Conference on College Composition and Communication Position Statement on Teaching, Learning, and Assessing Writing in Digital Environments contends, “The focus of writing instruction is expanding: the curriculum of composition is widening to include not one but two literacies: a literacy of print and a literacy of the screen” (NCTE, 2004). Opportunities for reimagining the composing process abound—from intelligent writing devices to cloud composing to personalizable wearable communication technology—and composition specialists are motivated to respond to new trends and demands of writing by embracing new technology in the writing classroom.

The fields of composition and technical and professional communication have increasingly emphasized the importance of being well versed in new composing strategies as well as technological ideologies. Selfe and Selfe (1994) call on teachers to pay attention to the rhetorics of technology and to help students learn to be critical technology users. Critical pedagogues like Paulo Freire, bell hooks, and Henry Giroux argue that we are better users of technology when we think critically about its nature and impacts. Accordingly, the writing classroom serves as an ideal technological contact zone, where students can meet, clash, and grapple with new inventions and learn to be smart users of technology. This is why there is no better time than now to introduce wearable technology—while it is still in its infancy—into writing instruction.

As wearable devices such as the iPod, Muse, Fitbit, GoPro, Google Glass, Oculus Rift, and Apple Watch become more commonplace in our society and students’ lives, writing instructors should seize the opportunity to examine such innovation and use its affordances to challenge the conventional modalities of writing. Market research firm SSI states that wearable technologies will become popular in the mainstream market within the next two to three years (Kannon, 2013). In another recent survey of 2,577 U.S. adults conducted by Harris Interactive, about 46% of respondents said they would be interested in owning a wearable technology device (Shannon-Missal, 2013). These statistics show that wearable technology devices are being increasingly adopted in the private and corporate sectors. Currently, one of the biggest market shares for wearables is the health industry. With the proliferation of affordable consumer healthcare and fitness wearables—like Jawbone, Fitbit, Apple Watch, Embrace, Nike FuelBand, and Google Smart (contact) Lens—wearables are emerging as an everyday necessity amid the growing demand for pervasive computing and data management in personal health and wellness. Fitbit, which accounted for 50% of the 2.7 million wearable bands shipped in the first quarter of 2014, signals to industry observers the rapidly growing nature of the wearables market (Alto, 2014).

And the education sector is not far behind. Universities around the world have started to experiment with the use of wearable devices for classroom instruction. University of Southern California journalism professor Robert Hernandez led a course in fall 2014 with an aim to create Glass-centric software for journalists (Gayomali, 2014; Thompson, 2014). He urges teachers and futurists to be proactive about teaching with wearable technology. According to a PBS survey conducted in early 2015, among people interested in wearable technology, “relative advantage, compatibility, complexity or ease of use, trialability and observability did play a role in their potential use and adoption of such device for educational purposes” (Weiss, 2015). These instances demonstrate how wearables can be effective learning tools that empower students to think creatively about writing and to collaborate more fully. By deploying and integrating wearables in the writing classroom, we are also encouraging and preparing students to be reflective users of emerging technology.

In this article, I introduce Google Glass as a wearable writing tool that could reimagine traditional writing pedagogy, particularly student peer reviews. To explore this possibility, I first provide an overview of the development of wearable technology and of peer review in writing instruction, then present a case study detailing how I deployed Google Glass in a first-year writing course. Finally, I offer a set of tips for teaching writing with wearable technology in general and technical writing courses.

I have employed participatory design as the methodology in this project. As Clay Spinuzzi argues, for participatory design, “design is research,” and it involves heavy direct interactions between researchers and their participants (Spinuzzi, 2005, p. 171). To encourage participatory research, researcher-designers come to conclusions in conjunction with their participants. In this study, I have included my students’ interpretations as simultaneous accounts parallel to my experience as the researcher, as outlined in the Discussion section of this article. There were three stages in this study:

Stage 1: Initial Exploration

I introduced the concept of wearables-augmented peer review to students at our first meeting and held a device orientation workshop with them. Students raised questions about the viability of the study and its potential challenges.

Stage 2: Discovery

The students and I decided collaboratively on the procedures and evaluative criteria for their participation in this study as part of the course. We discussed how this exercise might benefit the students’ learning in the writing course, clarified desired learning outcomes, and envisioned the contributions this study might make to the future of writing and technical communication.

Stage 3: Workshop (Prototyping)

We conducted demo sessions and devised a working procedure as a prototype for a plausible wearables-supported peer review. We held a total of four Google Glass-supported peer reviews during the semester, with the first two being experimental and the last two focused more on task evaluation.

The data collected in this study include my students’ reflections—narratives and exit survey responses—as well as my personal observations as the researcher and observer. These findings are reported in the latter sections of this article. In the next two sections, I provide a comprehensive review of the historical developments in wearable technology and student peer review to position this study as a bridge across the gap between emerging technology and writing pedagogy.

Taken as a whole, technology is an extension of humanity. Every technological advancement should seek to replicate or amplify our bodily or mental abilities—helping us accomplish tasks that we would not otherwise be capable of accomplishing within our normal capacity (Brey, 2000). Wearable technology, like any technological innovation, is seemingly the next step in consumer electronics designed to help us reconfigure daily operations to increase productivity, enhance communication, and heighten user satisfaction in everyday activities. What makes wearable technology—more fondly called wearables—distinct from other modern technology is its attempt to blur the separation between computers and everyday garments such as clothing, accessories, and singular automatons such as watches and headphones. Wearables are different from mobile gadgets such as smartphones and laptops. For the purpose of this essay, I identify wearables as hybrid, network-enabled devices that can be worn on or in the body and that are seamlessly integrated with a user’s everyday life and movements. Wearables are not just a pair of fancy earphones or a digital watch. Rather, they take advantage of the available means of connectivity and the burgeoning interest in the “quantified self” (Baumeister, 2012) to connect users to their goals—such as staying fit, getting information, or staying organized—by supplying, extracting, aggregating, and representing data that help users achieve those goals.

A Brief History of Wearables

While wearables continue to receive enormous attention from popular media and early adopters, the technology is not as new as it is often claimed to be. As early as the 1600s, during China’s Qing Dynasty, a fully functional abacus was designed to fit on a ring worn on the finger, allowing bean counters to perform mathematical (computing) tasks on the go. Later, in the 1800s, watchmaker Abraham-Louis Breguet reportedly created the first wearable timepiece for the Queen of Naples. Debate continues over whether a watch is a computer, given that it computes only time rather than serving the general purposes we expect of a computer in today’s terms (Thompson, 2014). Nonetheless, the industry agrees that the first truly wearable computer was invented in the 1960s, when mathematicians Edward O. Thorp and Claude Shannon built computerized timing devices to help them cheat at the game of roulette. Thorp and Shannon concealed in a shoe a timing device that could quite accurately predict where the ball would land on a roulette table. Similar devices developed later proved so successful that the state of Nevada—home to casino central Las Vegas—passed a law prohibiting them in 1985 (Thorp, 1998).

The 1980s saw wearables steadily emerge as groundbreaking. Steve Mann, a researcher and inventor specializing in electronic photography, wanted to create a machine that would record what the user saw through their right eye while allowing them to see through it at the same time (Buchanan, 2013). He called it the EyeTap project. Mann gradually whittled the device down to a sleek headset that resembles the modern Google Glass. In the mid-80s, Mark Schulze, a mountain bike fanatic, created the first helmet camera by rigging a video camera to a portable video recorder (Coldwell, 2012). Awkward and heavy, Schulze’s helmet cam was the forefather of today’s GoPro. In the late 1980s, Reflection Technology, Inc. created the Private Eye, a head-mounted display that used a vibrating mirror to create a display directly in the wearer’s field of vision (Lewis, 1988). Inventor Doug Platt adapted the display to a disk operating system-based hip-mounted computer, creating what we would truly consider a wearable computer. Students at Columbia University used the Private Eye to build an augmented reality repair system for laser printers, although these projects never quite left the research institutions (Webster, Feiner, MacIntyre, Krueger, & Massie, 2001). In 1994, Edgar Matias and Mike Ruicci of the University of Toronto debuted a wrist computer (Rhodes, n.d.). Their system presented an alternative to the emerging head-up display plus chording keyboard wearable: with the keyboard and display modules strapped to the operator’s forearms, text could be entered by bringing the wrists together and typing. Later, in 1996, the Defense Advanced Research Projects Agency hosted its forward-thinking “Wearables in 2005” workshop, bringing together industrial, university, and military visionaries to work on the common theme of delivering computing to individual users (Rhodes & Mase, 2006).

Entering the new millennium, as part of Kevin Warwick’s Project Cyborg (Warwick, 2000), Warwick’s wife, Irena, wore a necklace that was electronically linked to the nervous system via an implanted electrode array. The color of the necklace changed between red and blue depending on the signals in Warwick’s nervous system. Two years later, Nick Woodman, inspired by a surfing trip in Australia, invented the GoPro camera to allow consumers to take photos and record videos during extreme adventures. Apple collaborated with Nike in 2006 to launch the Nike+iPod Sports Kit, an activity tracker that measures and records the distance and pace of a walk or run. Fitbit, launched in 2009, worked in a similar manner, attaching to the user’s belt and measuring steps taken with an accelerometer. In 2012, the smartwatch era kicked off with Pebble’s successful Kickstarter campaign (Etherington, 2015), creating an integrated device that paved the way for the latest Android and Apple smartwatches. Approaching 2014, the virtual reality head-mounted gaming console Oculus Rift and the voice-activated computer Google Glass respectively released prototype explorer versions to a select public. Google Glass is a wearable computer with an optical head-mounted display. It displays information in a smartphone-like, hands-free format and can communicate with the Internet via natural-language voice commands. Besides its touchpad, Google Glass works via voice commands such as “OK Glass, get directions to …,” “send a message to …,” “record a video …,” and so forth (Figure 1).



Figure 1. Google Glass Explorer Edition (XE) prototypes, with and without frames, priced at $1,500 each.

While Google announced on January 15, 2015, that it would stop producing the Glass prototype and move the project into its next phase of development, Microsoft announced its latest prototype, Microsoft HoloLens, another smart-glasses unit that runs on Windows 10 and Windows Holographic. HoloLens uses sensors, an optical head-mounted display, and spatial sound to allow users to interact with augmented reality applications.

Multimodal Composition and Writing at the Intersections of Wearables

Since the 1980s, teachers of writing have been increasingly introducing technology into their curricula. From bringing the personal computer into the classroom to shepherding students onto the World Wide Web, writing instructors have strived to make learning relevant and to keep pace with technology. As computer and Internet technologies allow for a range of digital texts, including multimedia writing that integrates visuals, sound, and other forms of interactivity, many writing classrooms have been transformed into networked computer labs with personal computers, Internet connections, and devices with high-definition color, audio, and video. Moreover, desktop publishing and networked learning are becoming integrated into process theories of writing, and digital technologies are coming to be seen as interfaces wherein writers compose, communicate, and construct knowledge among themselves and their audiences. These interfaces introduce semiotic channels that go beyond alphabetical means of composing. Within our field, we have seen multimodal composition scholarship that explores visual composition and visual literacy, multimodal discourse, digital multiliteracies, new media applications, graphic and aural design, and multimodal writing assessments (Ball & Arola, 2010; Kress & van Leeuwen, 2001; Selber, 2004; Selfe, 2009; Shipka, 2011; Wysocki, Johnson-Eilola, Selfe, & Sirc, 2004). These literatures mount a platform for emerging technology to enter the writing classroom and expand the bandwidth of literacy practices on which our profession has focused since its conception.

The next steps at the crossroads of wearables and writing instruction are where it gets even more exciting. As each new technology emerges, writers and technical communicators need to position themselves at the forefront of exploring the device from the perspective of the user. Especially in technical writing courses, we teach students to visualize and design content for multiple media, audiences, and contexts and to lead in the study of usability of emerging products and processes. Students in these courses are studying to become writers or communicators who work with complex material in an environment where information is digitized and produced using complex information management systems. Voice-activated search using wearable technology like Google Glass enables students to interact with text, image, voice, and video during invention—with a potential for reframing technical writing process pedagogies, digital literacies, and students’ future work as communicators. Such rationale further motivates my deployment of Google Glass in a writing course so as to build students’ awareness and proficiency in using emerging technology to achieve their professional goals.

Motivated by Yancey’s call to invent new writing in response to the changing landscape of technology, I aim to expose students to cutting-edge wearables that will soon become prevalent in their lives. To do so, I focus on one aspect of the writing process that often remains entrenched in conventional modalities—student peer reviews. Years of teaching experience inform me that student peer review is a pedagogical arrangement that takes various forms and has broad application in the classroom. With the growing focus on collaborative learning, the concept of student peer review has gained increased attention in higher education in recent years. And for over four decades, the technology involved in student peer reviews has evolved alongside popular and instructional technologies. As a writing instructor and technology enthusiast, I see an opportunity to introduce wearables as an emerging composition technology and to evaluate their effectiveness via the student peer-review process.

With a futuristic vision, my goal is to reimagine peer-review activities in the writing classroom with an eye toward the critical application of wearables in the writing process. Before jumping into my narrative on the deployment of a prominent wearable in my classroom, I would like to revisit the pedagogical requirements and development of peer review in writing pedagogy. In the following section, I chart the traditional review processes leading to the integration of newer technologies in student peer reviews and make a case for implementing wearables in this essential part of the writing process.

For decades, student peer review has been a signature exercise in many composition and technical communication classrooms. Composition specialists dating back to the early 1970s, including Kenneth Bruffee (1973), Hawkins (1976), and Elbow (1973, 1981), have explained and defended peer review as a way to encourage students not only to produce writing but also to learn how to evaluate it as real audiences. Studies indicate that readers benefit from reading the texts of their peers and thinking critically about those texts in order to offer suggestions for revision, while writers benefit from their peers’ suggestions for revision (Bruffee, 1993; de Guerrero & Villamil, 2000; Ferris, 2003; Tsui & Ng, 2000; Villamil & de Guerrero, 1996). More importantly, Dana Ferris points out that peer feedback offers the opportunity for more feedback than the instructor could possibly provide alone (Ferris, 2003, p. 251). Recent studies have confirmed the reliability of student peer reviews, giving instructors more confidence in using peer reviews as critique complementary to the instructor’s evaluation (Cho, Schunn, & Wilson, 2006).

However, peer reviews are not without limitations. Jane Stanley (1992, p. 291) argues that students often experience difficulty in assessing their roles in the peer-review process, making it challenging for them to elicit information from their peers. Additionally, Amy Tsui and Maria Ng note that peer comments do not necessarily enable macro-text-based changes, or content-based revisions (Tsui & Ng, 2000, p. 167), because students tend to view their role in peer reviews as that of error detectors. These studies suggest that for peer reviews to be effective and to serve the purpose of improving student writing, students need to engage more deeply as critical readers, not just surface error spotters, in the peer-review process. To achieve this goal, many teachers have turned to computers and digital feedback for salvation—moving peer reviews from the conventional means of reading and writing, that is, papers and pens, to more collaborative digital spaces. Using newer technologies such as mobile computers and online learning management systems (LMS), many instructors from the fields of computers and composition, technical communication, and related disciplines have digitized the peer-review process. For instance, Dean (2009) surveyed online student writing and reports that digital peer reviews may afford closer reading of a text and the composing of more thoughtful comments than traditional peer review. Digital peer reviews are also essential to computer-mediated second-language instruction, as they provide “a safer and a more relaxed environment for language learners” to review one another’s writing (Sayed, 2010).

Digitizing the peer-review landscape does more than improve student experience and perceptions of peer review. For larger classes, online tools also promise to significantly reduce the onerous task of administering peer review. Sophisticated online peer-review systems such as PRAZE (Peer Review from A to Z for Education) and CPR (Calibrated Peer Review) facilitate flexible management of student peer-review activities. PRAZE (Mulder & Pearce, 2007), created at the University of Melbourne, allows instructors to customize peer-review processes within a subject and allows students to review one another’s work anonymously. In principle, students are in control of the whole review process with PRAZE, as they are given more autonomy in creating, sending, and receiving feedback on their work. CPR (Russell, 2004), trademarked by the University of California, differs from its contemporaries by emphasizing the training and calibration of reviewers’ skills before they provide feedback on any work. During the calibration stage, students evaluate example texts, also known as calibration essays, by answering questions about them. Students’ responses are compared to a reference set of correct answers, which students can access for purposes of feedback and comparison. Until students attain a minimum standard of competence in evaluation, they cannot graduate to reviewing the work of their peers. For instructors, this takes the burdensome work of preparing students to review their peers’ writing out of their hands.

More recently, researchers at the University of South Florida (USF) and Michigan State University (MSU) have respectively developed online collaborative platforms for giving better feedback on student writing. At MSU, the cloud-based Eli Review has been piloted since 2013 to facilitate the review and revision processes in writing based on criteria set by the instructor. At USF, MyReviewers has also received attention from instructors across rhetoric and composition courses, building a corpus of more than 88,800 student peer reviews and 99,500 instructor-graded projects. Off site, non-USF users had contributed 7,846 peer reviews and 1,040 instructor-graded projects to the corpus as of fall 2014 (MyReviewers, 2015). Other screen-based virtual peer-review systems not elaborated here but still worth noting are TOPIC at Texas Tech University (see Foreman, 2000) and SWoRD at the University of Pittsburgh (Cho & Schunn, 2007).

While the numbers and successes in the above-mentioned instances constitute an important argument for implementing computer-mediated peer reviews in the writing classroom, such approaches still face constraints. Martin Guardado and Ling Shi found that students missed the presence of the reviewer—as in face-to-face interactions—because they failed to use the dialogue capacity of computers during peer reviews (Guardado & Shi, 2007, p. 457). For students who lack typing skills, digital peer review has proved challenging, as demonstrated in Jin and Zhu’s (2010) study of conducting peer review using instant messaging technology: students who lack technological proficiency become disengaged, and the technology becomes a distraction in its ability to transform motives (p. 296). Having conducted various types of digital peer review in the writing classroom, I can testify to the challenges brought by these digital but still rather conventional modes of reviewing. Digital or not, many of the peer-review exercises we conduct today remain faithful to having students review their peers’ writing using printed or written comments, thus undermining the potential of a reviewer’s active presence in the feedback provided. To heighten student engagement, we need to break free from inscribed comments and consider adding new dimensions to verbal feedback. Entering the era of wearables and ubiquitous computing, I maintain that we can integrate the new affordances of wearable technology into the peer-review process to redesign and enhance the modality of writing and reviewing. Using the point-of-view (POV) video-capturing capacity of Google Glass, we can reimagine approaches to facilitating student peer reviews such that the activity is no longer tied to just texts but instead encourages the active involvement of both reviewers and student authors.
In the following sections, I offer a thick description of my uses and deployment of Google Glass in my spring 2015 first-year writing course, as well as a prototypical model for facilitating peer reviews using this wearable device.

I became interested in wearables when I bought an iPod Shuffle in 2009. As the technology improved and became more seamless in its integration with networked computers, I became intrigued by wearables’ potential to improve the learning and teaching experience. How could we utilize the affordances of Bluetooth headsets to make learning a better experience? What about real-time translation devices? Calculator watches? GPS-enabled fitness trackers? As I was still asking these questions, wearable computers entered the mainstream market and became the next big thing for many industries, including higher education. In spring 2015, I was recruited into the University of Minnesota’s Wearables Research Team (WRT). WRT explores the use and impact of Google Glass on the teaching and learning of writing to provide students and faculty with an opportunity to envision and deploy future scenarios in writing pedagogy (see “Funding” for grant information). Under the direction of Ann Hill Duin, director of graduate studies, and Joe Moses, senior lecturer in the Department of Writing Studies—both official explorers in Google’s National Explorer Program—we focused initially on a set of six technical writing courses in which students are taught to visualize and design content for multiple media, audiences, and contexts and to lead in the study of usability of emerging products and processes. Since my involvement, the team has expanded the use of Google Glass as modules across additional courses, including my University Writing course, a required first-year writing course for students at the University of Minnesota, Twin Cities.

As part of my research, I have deployed, workshopped, and experimented with Google Glass in my first-year writing class as a way to scrutinize the promises and perils of this wearable device. To that end, I developed a rationale for using Glass as part of the feedback mechanism in student peer reviews: Google Glass may add a new dimension to our writing experience by augmenting the writing and revision processes. While a traditional peer review requires students to review the writing of other student writers and respond to prompts given by the instructor, such practice is driven mainly by written texts, as evident in my review in the previous sections. Using Glass, students could track comments and editing suggestions through video recording. Reviewers may indicate places in the writing where revisions are recommended by snapping a picture or recording their suggestions on video. These images and videos could be sent to the respective writers after the review session. Such an affordance adds value to the review process, enriching writer–reviewer exchange and collaboration.

Design and Implementation

In the past, I have had students work in pairs or groups of three during peer reviews. Students were asked to bring two hardcopies of their in-progress paper (the rough draft), and I would collect the copies at the start of the peer-review session, randomize and redistribute them to the groups, and have students read and respond to a set of prompts (consisting of 8–10 questions) that I had prepared for the specific assignment. Students usually took about 15 minutes to read a paper and 20 minutes to complete the prompts. After that, about 35 minutes into the hour, I would ask students to wrap up on the first paper and move on to reviewing the second. The same steps repeated. At the end of the 75-minute class session, students would return their feedback (responses to the prompts and comments written on the rough draft) to the respective authors. At that point, I usually encouraged students to say a word or two to each draft’s author, pointing to the one thing the author should pay most attention to. Then the peer review concluded. Students packed their bags, and we would meet again in the next class session.

With Google Glass, the focus of review shifts from written comments to spoken critiques. Students would employ the think-aloud protocol when reviewing their peers’ drafts. Foreseeing that it might take students longer than 35 minutes to use a relatively new technology in their peer reviews, I devised a review process that requires them to individually review only one rough draft per session. Students were asked to bring in only one hardcopy of their rough draft, which I would collect, randomize, and redistribute to a different student. I also handed out prompts (the reviewer form) for the specific assignment. I assigned only four to five prompts for this version of the peer review because I wanted students to focus on verbalizing their thoughts rather than writing answers on the reviewer form. The following instructions were given to the students during a dry run of using Google Glass to review a paper. The same steps were followed during the actual peer-review session:

  1. Get a pair of Google Glass from your instructor.

  2. Put on Glass and start recording a video. See attached tutorials for ways to extend a video recording.

  3. Make sure your video recording is captured at the eye/reading level. Adjust lens as necessary.

  4. Start by announcing your name and the author’s name, something like: “I am Jason, and I am now reviewing Justin’s first major writing assignment.”

  5. Then, continue by thinking aloud as you review the paper. You don’t have to talk all the way through the paper but remember to verbalize your thoughts on different parts of the paper as guided by the peer-review prompts provided on the reviewer form.

  6. Fill out the reviewer form as you go. Don’t wait until you have finished recording to fill it out.

  7. Once you are done reviewing a paper, announce that you have completed that review process: “That’s all I have for you, Justin. I have finished reviewing your essay.”

  8. Stop the video recording. Be careful not to delete the recorded video by accident. You may review your recording by using the playback option.

  9. Turn Glass off and return it to your instructor. Your instructor will let you know when you should return the reviewer form to its corresponding student author. Don’t return it until you are told to do so, as other students might still be recording their reviews (Appendix).

Having conducted the Google Glass peer review several times over the semester, I observed that students need around 45 minutes to review one rough draft completely, depending on the length of the paper. At the end of the review, students returned the reviewer forms and the reviewed rough drafts to their respective authors. Since there was usually time left on the clock (in a 75-minute session), students were asked to share their thoughts with their peers, which typically resulted in a 10-minute face-to-face conversation before I dismissed them from class.

Logistics and Facilitation

Unlike the traditional peer review, where the instructor only needs to prepare the set of prompts prior to the class session, a Google Glass peer-review session involves far more logistical concerns and behind-the-scenes facilitation.

  • Device management. As of the writing of this essay, WRT has complete ownership of all 20 pairs of Google Glass we received from the grant. This means that we are responsible for their storage and maintenance, including the flow of devices from one researcher to another during the grant period. We keep a log of deployment dates and of who is responsible for collecting and returning the devices. Since the battery life of Google Glass is not ideal (as I discuss in the later section on challenges), I always took the devices home so I could fully charge them for the next day’s peer-review session.

  • Orientation and training. Beyond the initial introduction to Google Glass as an emerging wearable technology and how we could use it to enhance writing, I set aside a class session for students to test out the features of Google Glass and familiarize themselves with the steps involved in recording and saving video. As with most emerging technology, helpful resources for understanding the interface and operating system of Google Glass are scarce. I therefore drew a diagram (see Figure 2) to help students visualize the relationships among the various locations on the Google Glass interface as well as the directions and gestural commands used to navigate it.

  • File management. After each peer-review session, I would transfer all of the video footage that students had recorded to my computer and edit it as necessary (cutting out the frames before the reviewers announced their names and after they completed the review). I would name these video files with the reviewer’s and author’s names and upload them to a Google Drive folder shared by the class. Uploading all the video files usually took about 30 minutes, depending on the Internet connection speed. Once the upload was complete, a blanket e-mail would be sent to the class. Students were asked to review these video recordings, along with the written comments they received on the reviewer form and the markings on the rough draft, as they revised their papers.
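For instructors who want to automate the renaming step of a workflow like this, a short script can batch the work. The following Python sketch is a hypothetical illustration, not the workflow I actually used: it assumes a roster CSV with `original`, `reviewer`, and `author` columns and a local folder synced with the class’s shared Google Drive, and it copies each trimmed video into that folder under a `Reviewer_Author` name.

```python
import csv
import shutil
from pathlib import Path

def stage_review_videos(roster_csv: str, source_dir: str, drive_dir: str) -> list:
    """Copy trimmed peer-review videos into a Drive-synced folder,
    renamed as Reviewer_Author.<ext> so authors can find their feedback."""
    staged = []
    src, dst = Path(source_dir), Path(drive_dir)
    dst.mkdir(parents=True, exist_ok=True)
    with open(roster_csv, newline="") as f:
        # Expected columns: original, reviewer, author
        for row in csv.DictReader(f):
            original = src / row["original"]
            if not original.exists():
                continue  # skip videos not yet transferred from Glass
            new_name = f"{row['reviewer']}_{row['author']}{original.suffix}"
            shutil.copy2(original, dst / new_name)
            staged.append(new_name)
    return staged
```

Pairing each video with the students’ names up front also makes the blanket e-mail easier to compose, since the file list doubles as a record of who reviewed whom.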


                        figure

Figure 2. A mapping of Google Glass’s operating system interface and gestural commands.

Responses From Students

Minus the hype factor, students reported that they found Google Glass and video-based feedback helpful in producing more comments than a written review alone would. The following narratives showcase some students’ reactions to using Google Glass in their peer reviews:

  • “I think the activity done in class was successful in the fact that I could see exactly what [the reviewer] was thinking as he went through my paper; I received a lot more feedback than I normally would have through strictly the written form from peer review because of the Google Glass activity. One downfall of the peer review was that I felt that it was difficult to stay concentrated when I was giving my feedback while everyone around me was talking. I feel like I could have given much stronger feedback if I had read the paper where I wasn’t distracted by everyone else’s thought process.”

  • “I thought the exercise was beneficial because it allowed us to recognize when the reader thought or noticed something important. At the same time, it was uncomfortable because I wasn’t quite sure if I was doing everything correctly so at times was hesitant. I think this will change with more usages and reviews but that was my initial feelings! Overall I like using the Google glass and think they are a slick tool!”

  • “My thoughts on the exercise were that it was hard to review it and talk at the same time and try to record. Also, my glasses I don’t think were always pointing to the paper so it might be confusing for the person to understand where I was in the paper. But overall I think it will become easier to figure out and do now that we have done it once.”

Survey Results

At the end of the semester, six students (N = 6) volunteered to complete a survey questionnaire reporting on their experience with using Google Glass during peer reviews. While I recognize that the frequency counts in the results do not completely capture all students’ reactions toward Google Glass, the data serve as a launching point for a future quantitative study and a framework for collecting student perceptions of using wearables for peer review.

Screening question

Have you done any written peer reviews in a writing/English course before this semester?

Yes – 4

No – 2

Ratings of experience and perception:

For the following questions, check the box that best describes your experience.

Table

Table

Evidently, the students had mixed feelings. A majority of the students in this survey said that they could express themselves better when reviewing their peers’ papers. They also reported that the peer-review videos were helpful for revising their own papers. Overall, students reported that they liked how articulating their comments helped them stay on task, and the authors appreciated the rich comments they received from their peers, although most of them found the device uncomfortable to use. As the instructor, I had the luxury of a bird’s-eye view of the whole peer-review process, and I noticed some immensely valuable properties that Google Glass affords the peer-review process, which I discuss in the next section.

In his recent column in The Chronicle of Higher Education, technology reporter Steve Kolowich explores the uses of video in the writing classroom by asking this question: could video feedback replace the red pen? He reports that Australian instructors who skip written comments and employ unscripted video critiques of student writing find that video feedback offers an intimacy similar to in-person or written feedback “in a less-ephemeral way” (Kolowich, 2015). These instructors, Michael Henderson and Michael Phillips, write that students found video critiques to be “real,” “honest,” and “authentic” (2015, p. 59). In fact, Henderson and Phillips were not the first to embrace this flipped-classroom pedagogy model. Short-form, do-it-yourself videos are created by instructors around the world to shake up the brick-and-mortar classroom and conventional instructional design. Screencasting software such as Jing, QuickTime, and CamStudio allows readers to record all or a portion of their computer screen and to include audio in the recording. Readers or reviewers can offer feedback on writing by scrolling, highlighting, and annotating the text while providing verbal critiques for the author. Authors can review the responses by playing back the video recordings either with an installed video player or with a browser-based application such as Screencast.com.

In “Talking with Students through Screencasting: Experimentations with Video Feedback to Improve Student Learning,” Thompson and Lee (2012) argue that screencasting technologies present fresh approaches to student engagement and the revision process, providing more in-depth explanatory feedback for the writer than traditional written feedback. After experimenting with video feedback, or “veedback,” on student writing, they conclude that screencasting engages multiple learning styles while providing students with deeper explanatory evaluation (Thompson & Lee, 2012).

A wearables-supported peer review, driven by video-based response, seems a natural fit and a reasonable next experiment in writing instruction. More importantly, from my observations, first-person POV video feedback allows authors to see the reading of their article through the reviewer’s subjective angle (see Figure 4), taking into consideration the reading actions of the reviewer and giving authors a closer sense of what the reviewer is going through. Such a psychological capacity and embodied experience have never been possible with written comments. Advancing from existing scholarship on the instructor’s use of “veedback” on student work, I explore in the next sections the affordances of POV video feedback in student peer review.

Presence or Asynchronous Personable Feedback

In a thorough reassessment of peer-review activities in virtual environments, Breuch (2004) explains in her book, Virtual Peer Review: Teaching and Learning about Writing in Online Environments, that the use of computer technology remediates the exchanges among writers responding to one another’s writing. While Breuch emphasizes that virtual peer review is a remediation of face-to-face peer review, such activity accentuates written comments over oral feedback, resulting in an “abnormal discourse” in peer-review theory. While there have been no comparative studies offering a comprehensive evaluation of the two modes of peer review (face-to-face versus written feedback), wearables like Google Glass afford an intermediary that reconciles the two. As students employ a think-aloud critiquing method in the review process while still providing written comments on the author’s paper and reviewer form, student authors are able to see not just what their peers’ comments were but how the comments were made. This adds value to the reviews produced for writers: they are no longer fleshless comments but embodied feedback that takes the reviewer’s recorded presence into consideration.

When examining the video footage that students produced during the Google Glass-supported peer reviews, I could not help but notice how human those reviews are; the comments were presented in their most genuine form: the feedback was unscripted and unmodified; the reviewers rephrased a sentence or two to try to get their points across; jokes were made; and the feeling of the reviewer’s presence in the feedback was strong, albeit asynchronous to the authors.

This observation also aligns with Ned Kock’s media naturalness hypothesis (2001), a theory of communication media developed from Darwinian evolutionary principles, wherein a decrease in the naturalness of a communication medium (its degree of similarity to the face-to-face medium) brings an increase in the cognitive effort required to use it. According to this theory, humans are built for face-to-face communication; as such, “the use of discrete sounds (which later developed into complex speech) and visual cues, has been the predominant mode of communication” used by us over the course of humanity (Kock, 2001, pp. 11–12). It is fitting, then, that paraverbal and gestural cues work their way into students’ recorded feedback when using Google Glass. In fact, the hands-free, POV-recording feature of Glass seems to further encourage such personal and intimate input during student peer reviews.

This new dimension of presence strengthens the cognitive relationship between the author and the review and may allow authors to evaluate the feedback given by their peers in the context of the time and space in which the reviewer made the comments. Paired with the written comments prompted by the questions the instructor set on the reviewer form, POV video-based feedback could arguably provide a much richer critique than purely written feedback and other screen-mediated peer reviews (Figure 3).


                        figure

Figure 3. Students using Google Glass during a peer-review session.


                        figure

Figure 4. Screencast video feedback (from Thompson & Lee, 2012; left) versus Google Glass POV video feedback (right). POV = point-of-view.

Verbal and Visual Conduits of Reviewing

In his column, Kolowich highlights that video feedback has drawbacks as well: “Students said it can be more difficult to match specific parts of a video to relevant passages in their papers; printed notes in the papers’ margins might have made the connection easier” (Kolowich, 2015). Such a problem, however, can potentially be overcome if the reader’s reading paths are made explicit in the feedback. Over the past decade, various experiments have been conducted to “track the mind’s eye” during reading (Anson & Schwegler, 2012) and to explore semantic composition by monitoring eye movements (Pickering, Frisson, McElree, & Traxler, 2004). As lab technologies become more available to researchers, the use of eye-tracking technology along with semiotic analysis (Holsanova, Rahm, & Holmqvist, 2006) and even keystroke logging (Leijten & Van Waes, 2013) are emerging methods for testing, identifying, and recording writing and reading patterns, allowing these processes to be analyzed and visualized. While these methods give the observer a good sense of a reader’s reading conduit, they suffer from two major shortcomings. First, most of these experiments are only possible in a controlled lab environment. Eye-tracking devices and keystroke logs are tethered to the testing facility where the devices are housed, and a professional lab assistant’s help is necessary to collect and prepare data for review and analysis. Second, current eye-tracking devices are mounted to the computers where the reading takes place. This limits the medium through which the reader can review a text (computers only) and, unlike wearables, it immobilizes the reader by confining him or her to the eye-tracking station. As a result, participants’ behaviors during a test are less likely to simulate a real-life reading experience.
In their eye-movement study, Paulson, Alexander, and Armstrong (2007) also admitted that eye-tracking methods have physiological limitations: fixations (a series of very short pauses during reading) are difficult to identify, and, I would add, the meanings of pauses are even less likely to be elucidated.

With Google Glass, students were able to assume their regular reading practices during peer reviews. Though a few students were at first more conscious of recording a video while reading, most of them forgot that they were recording as time passed and seemed comfortable throughout the peer-review session. The POV recording of student feedback offers a subjective perspective taken from the reviewer’s reading angle, compared with a regular screencast, which only mirrors the reviewer’s reading. When reviewing the video feedback that students made for their peers, I found that the reader’s reading conduit was made explicit through the POV recording, as though the author were able to read through the reader’s eyes (Figure 4).

As expected, one key theme that emerged from the students’ narratives of experience was that they were able to see their reviewer’s thoughts, as one student put it aptly: “I could see exactly what [the reviewer] was thinking as he went through my paper” (emphasis added). What the student was able to see was of course not the actual thought but rather cues that signify her reviewer’s understanding of her paper. This suggests that when revising their drafts, students could take into consideration not just the verbal comments, written or spoken (the words the reviewer used), but also the paraverbal (how the reviewer said those words: the tone, pitch, and pace of voice; linguistic fillers such as “um” and “ah”) and nonverbal components (gestures, pauses, and head movements) captured through the POV subjective camera angle. These communicative components add up to provide more holistic feedback that is otherwise unattainable through plain written critiques. For instance, Breanna was able to tell which part of her paper her reviewer found hard to comprehend because the POV recording showed her where her reviewer paused and reread a few times, with several fillers like a long “um …” and “oh!,” before recommending that Breanna consider a different word choice in a sentence.

Furthermore, in the videos I collected, the reviewer’s hands are almost always captured in the recording. This adds an interesting layer to the reviewing process: as the reviewers pointed to specific words and sections in the paper, they created a reading path that is made visible to the author (Figure 5).


                        figure

Figure 5. A student visualizing his reading paths with gestures.

For instance, when Ryan was reviewing a paper and came across a jargon term he was not familiar with, he skimmed the paper (running his finger through the paragraphs) to look for a definition of the term. He found that the description came too late, on the second page, and so he recommended to Brandon that he insert a brief exposition in the third paragraph (pointing to an approximate location where the definition could go). Although many reviewers already make these kinds of visual recommendations using arrows and drawings in a written peer review, the POV video affordance of wearable writing makes visual demonstrations even more realistic and accurate. An author might also be able to tell how well he or she has organized the paper by looking at how often the reviewer has to go back and forth while reading, all captured in the POV video recording.

Embodiment and Emotionality in Writing Response

Aside from the sense of the reviewer’s presence and the visibility of the reading process, POV video responses are also able to capture the reviewer’s emotions in peer reviews. Emotionality is an additional dimension afforded by the POV recording that draws attention to embodiment and affect in multimodal composing. The overarching contention here is that wearable peer review may contribute to a beginning theorization of the embodied and felt experience of reviewing with wearable technology. Through POV video feedback, an author might be able to tell when and how a reviewer becomes excited or disengaged, judging from the varied tone and speed of reading (a higher or lower pitch, faster or slower reading) or from when the reviewer starts looking away or gets distracted by other activities (like clicking a pen or doodling). As such, peer review becomes a lived experience that can be recorded and replayed to enrich writing as well as rewriting.

In their article, “Embodied Composition in Real Virtualities: Adolescents’ Literacy Practices and Felt Experience Moving with Digital, Mobile Devices in School,” Ehret and Hollett (2014) reveal that previous accounts of students’ composing with new media have relied on an artificial bifurcation of the body and the screen. They argue that composition instructors should see literacy studies’ constant state of transition as the default movement (Ehret & Hollett, 2014, p. 447). Such a transition gives rise to alternative perspectives on the use of mobile technology in the writing classroom. This kind of pedagogical attention to the body in digital, mobile composing activities is not only preferred but necessary in redesigning the modality of writing with an eye toward the emergence of wearables in the classroom. The exigence is even greater if we want the fields of rhetoric and writing to continue leading discussions about cutting-edge technology in education.

Toward Augmented Reading and Composing

To Plato’s fears and Thoreau’s disdain, writing seems undesirable because it adds an additional separation between thought and its manifestation; writing is at least two steps removed from the actual idea, whereas speech is closer to the idea’s formation (Lewis, 2000). Yet, with the affordances of wearables like Google Glass, the gap between speech/orality and writing/literacy is bridged, supporting a more authentically mediated communication or response that bare writing cannot adequately provide. A wearable device like Google Glass, when used in writing, intensifies the ways the wearer speaks to, watches, reads from, or listens to (the various ways of interacting with) any set of texts. The coordination and refreshed stimulation of the senses, combined with the ability of wearables to perform seamless interactions between virtual and physical environments, open up new and exciting possibilities for writing and communication. Even two decades ago, researchers at MIT were already studying and designing physically based hypertext that could be interacted with through wearable reading or viewing devices:

Museum exhibit designers often have the dilemma of balancing too much text for the easily bored public with too little text for an interested visitor. With wearable computers, large variations in interests can be accommodated. Each room could have an inexpensive computer embedded in its walls, say in a light switch or power outlet. When a visitor enters the room, the wall computer can wirelessly download museum information to the visitor’s computer. Then, as the visitor explores the room, graphics and text overlay the exhibits according to his interests. (Starner et al., 1997, p. 7)

Today, augmented reality is no longer a fantasy but a viable technology in our lives. With augmented reality, hypertexts can be associated with physical objects, detailing instructions for use, repair information, history, or information left by a previous user. A wearable’s interface can make more efficient use of any communicative resource, whether in the workplace, in school, at home, or among the general public.

As is evident in this study, student peer review can be an even more immersive experience with wearables than with mirrored screencasts and traditional written feedback. This small-scale study has also demonstrated the vital position writing instructors occupy in preparing students to read and compose in and for augmented environments. Certainly, wearables are not yet a technology accessible to many instructors and students (see “Financial support” in the next section for gaining institutional support). Yet, as with any popular technology, critical-mass adoption is only a matter of time. When the time comes for instructors to implement wearables in their classrooms, it would be helpful to know some strategies for integrating them into their teaching. In the following section, I offer a short list of non-device-specific tips on deploying wearables in the writing classroom.

As foreshadowed in the logistics and facilitation discussion, a great deal of preparation goes unseen by the students when it comes to deploying wearables in the classroom. In this section, I outline some of the most important strategies for using wearables (in general, not just Google Glass) in the writing classroom, from managing the flow of equipment, to hooking devices up to the Internet, to getting buy-in from departments and technical support from information technology (IT) offices. I then briefly discuss some of the challenges I experienced with using Google Glass this semester and how I overcame them.

  • Logistics. There are several ways to find a home for wearables so that you are not the sole person responsible for keeping and guarding them. One avenue to consider is your institution’s library, smart media lab, or technology center. Collaborate with these units to set up checkout policies that make it easy for students and instructors to obtain the wearables they need for class and to return them after use. Depending on the needs of the instructor, a consent or information-release form might need to be signed by the students so their work can be used by the instructor for research or assessment purposes.

  • File sharing. Devise a secure way for students and instructors to transfer and share media files made in class without compromising the confidentiality of the work, especially that of the students. Cloud-based shared drives (e.g., Google Drive, OneDrive, and Dropbox) and institutional LMSs (e.g., D2L, Blackboard, and Moodle) are viable options.

  • Connectivity. What makes modern wearables unique is their ability to connect to the Internet, allowing the user to browse the web and use network-enabled applications. When I first deployed Google Glass, our university’s network would not allow the device to sign on because it required two-step authentication, which Google Glass could not perform. In the early phase of the research, WRT team members each used personal hotspots from their mobile carriers to enable network connections in the classroom (which consumed the researchers’ personal data quotas). Thus, it is important to consider how students might connect their wearables to the university network before integrating them into your curriculum. Some devices, such as Pebble, iPod, and Fitbit, get online more easily than more sophisticated devices like Oculus Rift and Google Glass.

  • Writing with wearables. Consider all the affordances of the wearable you plan to deploy in your writing classes; don’t be restrained by the conventions of composing. My experience tells me that it takes a while to get used to any new device, so give yourself time to test out the wearable (take it home and use it extensively) so you will be comfortable with it when deploying it in class. Familiarize yourself with the operating system as well as the resources available to help you learn how to use and configure the device. These resources should be made known to students as well. Be sure to build in an orientation session for students to become acquainted with the device and ask questions about its functionality before using it for major writing activities.

  • Technical support. Get buy-in from your institution’s technical units, such as the provost’s office, classroom or academic technology support, and IT offices, by discussing, validating, and drawing on their collaborative experiences to build a foundation for a wearables collaboratory. These collaborations will prove beneficial, especially in classroom support, research initiatives, and devising plans for advancing instructional technology and design.

  • Financial support. Wearables are undoubtedly too expensive for most college classrooms at the time of this writing. Grants are the most viable way for most institutions to secure wide-scale adoption and deployment of wearables in the classroom. Forming institutional and cross-departmental collaboratory units is one way to share budgets and encourage expanded fiscal allocation. More importantly, researchers and instructors should align their rationale for adopting wearables with their respective institutional goals. External sources such as DonorsChoose and Digital Wish grants (for smaller-scale technology adoption in K-12 settings) are possible avenues for funding. Of course, one might also find unique opportunities on aggregators like Grants.gov and Grantwatch.com.

For instructors interested in deploying wearables in their classes, these strategies are worth considering to maximize the efficiency of deployment during class time.

Challenges With Google Glass

It goes without saying that Google Glass has not yet reached a stable state in which the device is ready for mass-market adoption. When deploying it in my first-year writing class, I experienced various issues that hampered its use in the classroom:

  • Internet connectivity. As mentioned in the previous section, getting on the university’s network was a challenge in my Google Glass deployment. I could allow students to pair their devices with my phone’s hotspot, but the connectivity grew weak when the entire class tried to surf the net at once. As a result, I forwent having students go online with Google Glass and designed activities around an off-line setting. This undoubtedly undermines the capabilities of the device, so the research team is working on resolving this issue.

  • Physical discomfort. Especially with newer innovations, we as instructors need to watch out for the potential discomfort a new device may cause the user, be it physical or psychological. In my case, I learned that Google Glass heats up after a period of use, especially when recording a long video. Students expressed discomfort with wearing it for the whole class session. Consequently, this problem forced me to shorten the peer-review session and rethink how to facilitate it.

  • Conceptual modeling. Conceptual models are the go-to mental simulations and mappings of the relationships between the parts of a designed object that we form when we encounter it. Wearable devices tend to lack clear conceptual models because of their pioneering designs. What I noticed when first introducing Google Glass to my students is that they did not have a clear mental model for the device: was it a pair of glasses, a computer, a smartphone, or a headset? Thus, it was challenging for them to visualize the relationships between Google Glass’s gestural commands and the operating system’s interface, let alone to use the device to accomplish a task like peer review. To help them acquire such a mental model, I put up the diagram illustrated in Figure 2 each time we conducted a peer review using Google Glass.

In summary, wearable technology is not just a popular fad; it is part of the next evolutionary step in personal computing, one with an enormous effect on how we communicate with one another, including through writing. As writing specialists, we are at the cusp of adapting to new pedagogical approaches and tools as they present themselves to us in an age of technological advancement. As writing teachers, we have no greater calling than to help our students become successful communicators and citizens of their time. Wearable technology has emerged as an up-and-coming innovation that we cannot afford to ignore, and we should strive to design curricula that respond to the changing landscape of writing and writing instruction. This essay has presented a brief history of the development of wearable technology, introduced the implementation of one wearable technology, namely Google Glass, in the writing classroom, and suggested ways such a wearable device could remediate peer-review activities.

Given my first experience with the Google Glass deployment and students’ reaction to using the device in their peer review, I am motivated to further investigate the affordances of wearables in the peer-review process and how they might enhance the overall writing experience. For colleagues interested in computers and writing, future studies may ask the following questions:

  • How can we measure student satisfaction with using wearables as part of their writing instruction? And is student satisfaction a critical factor in determining the effectiveness of an instructional design?

  • Do students take video feedback to heart, and how do we know? Is there a way to determine the optimal video length for feedback? The “richness” of video content? What could serve as our empirical evidence?

  • What are some potential drawbacks of using video feedback rather than written or in-person comments?

  • How can we add/increase interactivity in student peer review using wearables?

These questions serve as a point of departure for building on the limited body of research on wearable technology in higher education, highlighting a critical need for greater focus on reframing writing pedagogy and digital literacies as we prepare students to enter the 21st-century workforce, where complex technology and systems are already commonplace.

Using Google Glass During Peer Review

Glass affords a new dimension to the writing experience by augmenting the writing and revision processes. While a traditional peer review asks students to review the writing of other student writers and respond to prompts given by the instructor, such practice is driven mainly by written texts. Using Glass, students can track comments and editing suggestions through video recording. Reviewers may mark places in the writing where changes are recommended by snapping a picture or recording their suggestions on video. These images and videos can then be sent to the respective writers after the review session. Such an affordance adds value to the review process, enriching writer–reviewer exchange and collaboration.
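Sending the captured media back to the right writers is mostly a file-sorting problem. As a minimal sketch, assuming a hypothetical naming convention (`<reviewer>_<author>_<n>.<ext>`, e.g., `jason_justin_1.mp4`) applied when the files are exported from Glass, a short script could group each reviewer's pictures and videos into per-author folders for distribution; neither the function name nor the convention comes from Glass itself:

```python
import shutil
from pathlib import Path

def distribute_media(export_dir: str, out_dir: str) -> dict:
    """Copy each exported media file into a per-author folder so the
    feedback can be handed back to the right writer.

    Assumes files are named "<reviewer>_<author>_<n>.<ext>"; anything
    that does not follow the convention is skipped.
    """
    sent = {}
    for f in Path(export_dir).glob("*.*"):
        parts = f.stem.split("_")
        if len(parts) < 3:
            continue  # not named by the convention; leave it alone
        author = parts[1]
        dest = Path(out_dir) / author
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(f, dest / f.name)
        sent.setdefault(author, []).append(f.name)
    return sent
```

The per-author folders can then be shared however the class already exchanges files (e-mail, a course management system, or Google+ circles as described below).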

Logistics: How to facilitate the augmented peer review

Students are put into groups of two, and each student is assigned two sets of student writing. Depending on the number of students in the class, each group or student may be loaned a Google Glass device. After an initial orientation on Glass’s picture- and video-capturing functions, students use Glass for the entire peer-review process:

  1. Put on Glass and start recording a video. See the tutorials below for how to extend a recording and snap a picture while recording a video.

  2. Check the display to make sure the recording captures the paper at your reading level.

  3. Start by announcing your name and the author’s name, something like: “I am Jason and I am now reviewing Justin’s first major writing assignment.”

  4. Then, continue by thinking aloud as you review the paper. You don’t have to talk all the way through the paper, but remember to verbalize your thoughts on its different parts, guided by the peer-review prompts provided by the instructor.

  5. Fill out the reviewer sheet as you go. Don’t wait until you have finished recording to fill it out.

  6. Once you are done reviewing a paper, announce that you have completed that review process: “That’s all I have for you, Justin. I have finished reviewing your essay.”

  7. Stop the recording. Be careful not to delete it by accident.

  8. Then, repeat Steps 2 to 7 for the second piece of student writing.

  9. Once you are done reviewing both papers, turn Glass off and return it to your instructor.

  10. Your instructor will let you know when you should return the reviewer sheets to their respective student authors. Don’t return them until you are told to do so, as other students might still be recording their reviews.
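Because the reviewer sheets are released all at once (Step 10), the instructor needs a quick way to confirm that every reviewer has finished both assigned papers. The steps above can be sketched as data in a short helper; the names here (`REVIEW_STEPS`, `session_complete`) are illustrative assumptions, not part of any Glass software:

```python
# The per-paper routine each reviewer repeats (Steps 2-7 above).
REVIEW_STEPS = [
    "Check the recording captures the paper at reading level",
    "Announce reviewer and author names",
    "Think aloud through the paper, guided by the prompts",
    "Fill out the reviewer sheet as you go",
    "Announce the review is complete",
    "Stop the recording",
]

def session_complete(reviews_done: dict) -> bool:
    """True once every reviewer has finished both assigned papers,
    so the reviewer sheets can be returned all at once (Step 10).

    reviews_done maps each reviewer's name to the number of papers
    they have finished reviewing.
    """
    return all(count >= 2 for count in reviews_done.values())
```

For example, `session_complete({"jason": 2, "amy": 1})` is false, signaling that sheets should not yet be returned.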

If you are not sure whether your Glass is working correctly, always ask your instructor for help. Do not try to force the device to work. Google Glass is an expensive device, so give it the utmost care when using it.

Tutorial 1: Taking Pictures With Google Glass

Glass is equipped with a 5 MP camera and software enhancements, such as HDR, that detect low-light situations and automatically capture a brighter, sharper picture. Best of all, this software works even in tough situations with moving subjects.

There are five ways to capture a picture:

  • From the Home screen, say “ok glass, take a picture.”

  • Press the camera button above your right eye to capture a picture.

  • The viewfinder, reached through the Show viewfinder card.

  • The Wink feature, which lets you snap a picture by winking your right eye (when enabled in Settings).

  • Glass gestures. Tap the Home screen, swipe to find either the Show viewfinder or Take picture card, and select.

After snapping a picture, you’ll see a brief preview of the shot you’ve just taken before it is saved to your timeline. Tap the touchpad while the preview is on-screen, or say “ok glass, share with …,” to share the picture with one of your contacts or Google+ circles.

View your pictures

To view the picture in your timeline, swipe forward on the touchpad from the Home screen. Glass will make a collage out of pictures, videos, and vignettes taken during a day and bundle them together.

Tap into the photo bundle to browse, and then on a picture to share or delete it from your timeline. You can also find the picture in your Recently Added album on Google+ once Glass has a chance to sync your pictures.

Tutorial 2: Capturing a Video With Google Glass

Capturing video clips with Glass works similarly to taking a picture:

  • Press and hold the camera button above your right eye for 1 second.

  • From the Home screen, say “ok glass, record a video.”

While recording, you’ll see what is being recorded by Glass in your display as well as a time counter for your video recording.

Extend a recording

Glass defaults to recording video clips of 10 seconds. To continue recording past the default limit, do either of the following while the video is in progress:

  • Press the camera button.

  • Tap the touchpad and then Extend video.

You’ll know it worked when the progress bar along the bottom of the display disappears. The video will then continue until you stop it or run out of storage or battery.

Stop a recording

To stop recording your video, tap the touchpad and swipe through the card actions to select Stop recording. If you extended your recording, you can also press the camera button to stop it.

View your recording

To view the video in your timeline, swipe forward on the touchpad from the Home screen. Glass will make a collage out of pictures, videos, and vignettes taken during a day and bundle them together. Tap into the bundle to browse, then tap a video to play it. Alternatively, swipe forward to share or delete the video from your timeline. You can also find the video in your Auto Backup album on Google+ once Glass has had a chance to sync.

I am grateful to the students in my first-year writing course for allowing me to use their appearances, responses, and narratives in this essay. I am also thankful to the anonymous reviewers for their helpful feedback and revision recommendations.

All names used in this essay are pseudonyms modified to protect students’ confidentiality.

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: I thank the University of Minnesota College of Liberal Arts (CLA) for their generous funding of the project titled “‘Reframing’ Writing Pedagogy and Digital Literacies Across the CLA Curriculum,” an Academic Innovation Grants initiative (S14-12AI) secured by co-principal investigators Ann Hill Duin and Joe Moses. This funding provided for the Google Glass devices and support of undergraduate student participation on the research team. The Academic Innovation Grants Program exists to encourage faculty and department innovation in teaching and learning through the application of instructional technology. Funding is typically granted for:

  • Equipment and software licenses to be used by students or instructors in the classroom.

  • Equipment and software licenses used to develop new or improved methods and materials for instructional use both in and out of the classroom.

  • Wages for student assistants to create technology-enhanced resources that service multiple courses or a large population of students.

  • Contract fees for professional technology service providers. (For more information, see http://claoit.umn.edu/grants/academicinnovation.php)

References

Alto, P. (2014). Fitbit accounted for nearly half of global wearable band shipments in Q1 2014. Canalys.com. Retrieved from http://www.canalys.com/newsroom/fitbit-accounted-nearly-half-global-wearable-band-shipments-q1-2014
Anson, C. M., Schwegler, R. A. (2012) Tracking the mind’s eye: A new technology for researching twenty-first-century writing and reading processes. College Composition and Communication 64(1): 151–171.
Ball, C. E., Arola, K. L. (2010) Visualizing composition 2.0, Boston, MA: Bedford/St. Martin's.
Baumeister, R. (2012) Willpower: Rediscovering the greatest human strength, New York, NY: Penguin Books.
Breuch, L.-A. K. (2004) Virtual peer review: Teaching and learning about writing in online environments, New York, NY: SUNY Press.
Brey, P. (2000) Technology as extension of human faculties. In: Mitcham, C. (ed.) Metaphysics, epistemology and technology. Research in philosophy and technology Vol. 19, Amsterdam, The Netherlands: Elsevier, pp. 59–78.
Bruffee, K. (1973) Collaborative learning: Some practical models. College English 34: 634–643.
Bruffee, K. (1993) Collaborative learning: Higher education, interdependence, and the authority of knowledge, Baltimore, MD: The Johns Hopkins University Press.
Buchanan, M. (2013). Glass before Google. The New Yorker, Newyorker.com. Retrieved from http://www.newyorker.com/tech/elements/glass-before-google
Cho, K., Schunn, C. (2007) Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education 48(3): 409–426.
Cho, K., Schunn, C., Wilson, R. (2006) Validity and reliability of scaffolded peer assessment of writing from instructor and student. Journal of Educational Psychology 98(4): 891–901.
Coldwell, W. (2012). Who dares films: Why extreme-sports fans love helmet-mounted cameras. The Independent, Independent.co.uk. Retrieved from http://www.independent.co.uk/life-style/gadgets-and-tech/features/who-dares-films-why-extreme-sports-fans-love-helmet-mounted-cameras-7637440.html
Dean, C. (2009) Developing and assessing an online research writing course. Special issue on Writing Technologies and Writing across the Curriculum. Across the Disciplines 6. Retrieved from http://wac.colostate.edu/atd/technologies/dean.cfm
de Guerrero, M. C. M., Villamil, O. S. (2000) Activating the ZPD: Mutual scaffolding in L2 peer revision. The Modern Language Journal 78: 484–496.
Ehret, C., Hollet, T. (2014) Embodied composition in real virtualities: Adolescents’ literacy practices and felt experiences moving with digital, mobile devices in school. Research in the Teaching of English 48(4): 428–451.
Elbow, P. (1973) Writing without teachers, New York, NY: Oxford University Press.
Elbow, P. (1981) Writing with power: Techniques for mastering the writing process, New York, NY: Oxford University Press.
Etherington, D. (2015) Pebble Time is now the most-funded Kickstarter project ever. Techcrunch.com. Retrieved from http://techcrunch.com/2015/03/03/pebble-time-is-now-the-most-funded-kickstarter-project-ever/
Ferris, D. R. (2003) Response to student writing: Implications for second language students, Mahwah, NJ: Lawrence Erlbaum.
Foreman, J. (2000). The leading edge of webcentric writing instruction. Centerdigitaled.com. Retrieved from http://www.centerdigitaled.com/converge/?pg=magstory&id=3386
Gayomali, C. (2014) Meet the USC journalism professor leading a course on Google Glass. FastCompany.com. Retrieved from http://www.fastcompany.com/3028476/whos-next/meet-the-usc-journalism-professor-leading-a-course-on-google-glass
Guardado, M., Shi, L. (2007) ESL students’ experience of online peer feedback. Computers and Composition 24(4): 443–461.
Hawkins, T. (1976) Group inquiry techniques for teaching writing, Urbana, IL: ERIC/NCTE.
Holsanova, J., Rahm, H., Holmqvist, K. (2006) Entry points and reading paths on newspaper spreads: Comparing a semiotic analysis with eye-tracking measurements. Visual Communication 5(1): 65–93.
Jin, L., Zhu, W. (2010) Dynamic motives in ESL computer-mediated peer response. Computers and Composition 27(4): 284–303.
Kannon, J. (2013). Wearable technology grows in popularity. Survey Sampling International (SSI). Retrieved from https://www.surveysampling.com/about/news/2013/wearable-technology-grows-in-popularity/
Kock, N. (2001) The ape that used email: Understanding e-communication behavior through evolution theory. Communications of AIS 5(3): 1–28.
Kolowich, S. (2015) Could video feedback replace the red pen? The Chronicle of Higher Education. Retrieved from http://chronicle.com/blogs/wiredcampus/could-video-feedback-replace-the-red-pen/55587
Kress, G., van Leeuwen, T. (2001) Multimodal discourse: The modes and media of contemporary communication, New York, NY: Oxford University Press.
Leijten, M., Van Waes, L. (2013) Keystroke logging in writing research: Using Inputlog to analyze and visualize writing processes. Written Communication 30(3): 358–392.
Lewis, P. (1988). The executive computer; a top machine carries a top price. The New York Times, Nytimes.com. Retrieved from http://www.nytimes.com/1988/01/10/business/the-executive-computer-a-top-machine-carries-a-top-price.html
Lewis, V. (2000) The rhetoric of philosophical politics in Plato's Seventh Letter. Philosophy and Rhetoric 33(1): 23–38.
Mulder, R. A., Pearce, J. M. (2007). PRAZE: Innovating teaching through online peer review. In ICT: Providing choices for learners and learning. Proceedings of the 24th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 727–736), Singapore.
MyReviewers (2015). Research. MyReviewers.com. Retrieved from http://myreviewers.com/research/
National Council of Teachers of English. (2004). CCCC position statement on teaching, learning, and assessing writing in digital environments. Retrieved from http://www.ncte.org/cccc/resources/positions/digitalenvironments
Paulson, E. J., Alexander, J., Armstrong, S. (2007) Peer review re-viewed: Investigating the juxtaposition of composition students’ eye movements and peer-review processes. Research in the Teaching of English 41(3): 304–335.
Pickering, M. J., Frisson, S., McElree, B., Traxler, M. J. (2004) Eye movements and semantic composition. In: Carreiras, M., Clifton, C. (eds) The on-line study of sentence comprehension: Eyetracking, ERPs, and beyond, New York, NY: Psychology Press, pp. 33–50.
Rhodes, B. (n.d.). A brief history of wearable computing. MIT Wearable Computing Project, Media.mit.edu. Retrieved from https://www.media.mit.edu/wearables/lizzy/timeline.html
Rhodes, B., Mase, K. (2006) Wearables in 2005. IEEE Pervasive Computing 5(1): 92–95.
Russell, A. A. (2004). Calibrated peer review: A writing and critical thinking instruction tool. Invention and Impact: Building Excellence in Undergraduate Science, Technology, Engineering and Mathematics (STEM) Education, American Association for the Advancement of Science. Retrieved from http://www.aaas.org/publications/books_reports/CCLI/
Sayed, O. (2010) Developing business management students’ persuasive writing through blog-based peer-feedback. English Language Teaching 3(3): 54–66.
Selber, S. A. (2004) Multiliteracies for a digital age, Carbondale: Southern Illinois University Press.
Selfe, C. (2009) The movement of air, the breath of meaning: Aurality and multimodal composing. College Composition and Communication 60(4): 616–663.
Selfe, C., Selfe, R. (1994) The politics of the interface: Power and its exercise in electronic contact zones. College Composition and Communication 45(4): 480–504.
Shannon-Missal, L. (2013). Are Americans prepared to sport wearable tech? Harris Interactive. Retrieved from http://www.prnewswire.com/news-releases/are-americans-prepared-to-sport-wearable-tech-230786341.html
Shipka, J. (2011) Toward a composition made whole, Pittsburgh, PA: University of Pittsburgh Press.
Spinuzzi, C. (2005) The methodology of participatory design. Technical Communication 52(2): 163–174.
Stanley, J. (1992) Coaching student writers to be effective peer evaluators. Journal of Second Language Writing 1: 217–233.
Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., Pentland, A. (1997) Augmented reality through wearable computing. Presence 6(4): 386–398.
Thompson, C. (2014) The pocket watch was the world’s first wearable tech game changer. Smithsonian Magazine. Retrieved from http://www.smithsonianmag.com/innovation/pocket-watch-was-worlds-first-wearable-tech-game-changer-180951435/?no-ist
Thompson, R., Lee, M. (2012) Talking with students through screencasting: Experimentations with video feedback to improve student learning. Journal of Interactive Technology and Pedagogy 1. Retrieved from http://jitp.commons.gc.cuny.edu/talking-with-students-through-screencasting-experimentations-with-video-feedback-to-improve-student-learning/
Thorp, E. (1998). The invention of the first wearable computer. In Proceedings of the Second International Symposium on Wearable Computers (pp. 4–8), Pittsburgh, PA.
Tsui, A. B. M., Ng, M. (2000) Do secondary L2 writers benefit from peer comments? Journal of Second Language Writing 9(2): 147–170.
Villamil, O. S., de Guerrero, M. C. M. (1996) Assessing the impact of peer revision in L2 writing. Applied Linguistics 19: 491–514.
Warwick, K. (2000). Cyborg 1.0. Wired, 8(2). Archive.wired.com. Retrieved from http://www.wired.com/2000/02/warwick/
Webster, A., Feiner, S., MacIntyre, B., Krueger, T., Massie, W. (2001). Augmented reality in architectural construction, inspection, and renovation. Proceedings of the ASCE Third Congress on Computing in Civil Engineering, 913–919.
Weiss, A. (2015) How wearable devices fare among students. PBS MediaShift. Retrieved from http://mediashift.org/2015/03/how-to-embrace-innovative-learning-opportunity-with-wearables/
Wysocki, A. F., Johnson-Eilola, J., Selfe, C., Sirc, G. (2004) Writing new media: Theory and applications for expanding the teaching of composition, Logan: Utah State University Press.
Yancey, K. (2009). Writing in the 21st century. National Council of Teachers of English. Retrieved from http://www.ncte.org/library/NCTEFiles/Press/Yancey_final.pdf

Author Biography

Jason Chew Kit Tham is a PhD student in the Rhetoric and Scientific and Technical Communication program at the University of Minnesota—Twin Cities, where he teaches first-year composition and technical and professional writing. He studies how emerging technologies invite different ways of thinking and learning and the increasingly intense flow of information occurring among people and machines. One of his long-term projects is investigating the scale and intensity of interconnected complex learning networks in the digital communication context.