The ‘Snapshot of Yourself’

Hi Everybody! I hope you’re all having a productive semester. It’s good to be back with the Writing With Machines gang!

The first reading assignment from Warnock’s text was a great opener to the purpose of OWcourses, as I continue to think about how to replicate the face-to-face experience online.

The most valuable pieces of advice Warnock offers are to understand the difference between ‘Responses versus Grading’ and to keep feedback ‘Conversational versus one-way-announcements.’

These are the types of interactions I feel comfortable with in the classroom, and I’m always asking how to bring them into OW courses. Of course, it all comes down to the kinds of technology that can help create more personal instructor-student experiences around the writing process.

One practice I already use when providing online feedback is what Warnock calls ‘in-text markers.’ At times I give students the opportunity to submit a thesis statement or body paragraph via email, and most of my feedback comes in the form of highlighted comments, underlined areas of focus, arrows for direction, and so forth. These simple markers make it easy for students to visually follow my suggestions and questions about their assignments.

To push beyond markers, I want to remind myself of the importance of the ‘first week icebreakers’ Warnock mentions, especially the ‘snapshot of yourself,’ which is vital for students to feel I’m present, or more so to see me as their AUDIENCE. That leads me to the ‘technologies of responses’ and how to use these resources to avoid bloodshot eyes from hours of staring at the computer, not to mention carpal tunnel. I love the idea of using spoken comments and audiovisual comments. I’m interested in exploring what apps or programs are out there that work best for providing such feedback. Any suggestions, folks?

I find that voice, facial gestures, heck, even hand gestures, communicate my personality to students better than the sterile typed comment. I know a few students have submitted their essays to the Writing Center via email and received some great video responses from writing consultants. The feedback works well because students can pause and replay the videos, which scroll through their essays while audio commentary plays in the background. So I’ll definitely be investigating what methods the Writing Center uses to make these videos. I also need to evaluate how time-consuming it is to make videos and which apps/programs are most efficient.

If anybody has some audio/ video app suggestions please send them my way. I’m not the most tech savvy person, but I’m always down to learn!

Thoughts on Feedback — Week 1

The thread connecting the writing issues that most frequently prompt my response — logical gaps, weak support, and irregular clarity — is my role as the writer’s audience. When I ask marginal questions that begin with how or why, or when I say “this sentence is unclear to me,” I am trying to help the writer understand that I am interested in their argument, and am pointing out where I become skeptical or confused. This kind of feedback is most effective in onsite classes when it is supported by a face-to-face conference.

Having read this week’s assigned chapter, I think that audiovisual resources, such as a Camtasia recording, would be an effective method of providing end comments and drawing attention to specific areas in the text. I could use Camtasia in conjunction with the color highlighting tools available in the SpeedGrader function in Canvas. Warnock cautions against feedback becoming “mechanical” and “inauthentic” (126), and I think voice and video responses would keep that from happening.

In addition, I would like to experiment with recording my feedback using my tablet and stylus. Perhaps the handwritten marks and notes, in addition to my voice comments, might foster a more personal, less mechanical, experience for the student.

In the past I have used the comments bank in Turnitin, and I currently use my grading rubric as a Google Doc scoring sheet that I fill out for each student’s paper and share the link with them. This practice has been effective, especially in reinforcing my expectations for their writing. But in an effort to increase efficiency, I plan to check out the different programs listed by Warnock on page 126.

Testing the Waters: A Playful Approach to Writing Response Tools

Sometimes I write too many notes on a student’s paper when I grade it electronically. It’s sad, but true. I own it. I’ve been trying to pull back, telling myself “Kellen! Don’t overwhelm them! Be judicious, but, for everyone’s sake, be pithy, mate!” That’s why I’m excited to test out some of the tools Scott Warnock glosses to see if I can give more detailed feedback in a way that is more consumable for students. Namely, I wanna try these three in particular: macros (cuz they seem easy, right?), rubric software (cuz I love me a rubric), and Camtasia (cuz wouldn’t it be nice if I could just say things?). Oh! I’m also gonna totally just add a quick something about peer review stuff at the end.

I can totally get on board with Warnock’s assertion that all writing in an OWcourse is an opportunity for improvement—an opportunity for you to respond to a student in real time without the looming pressure of a grade. However, I am equally wary of burning out—of providing really stellar feedback to the first few papers before becoming increasingly fatigued until I just quit in frustration and watch Netflix. Of course, according to Warnock, I could just create what are called ‘macros’ to input common comments using ordinary keyboard shortcuts. Stunning! I love shortcuts. But, I’m not gonna lie. I tried for a solid 30 minutes without figuring it out. So, if anyone knows how and wants to walk me through it, I’d be so appreciative!!
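Since I couldn’t crack the macro feature either, here is a bare-bones sketch of the underlying idea in plain Python rather than in Word itself: a little bank of canned comments keyed to short codes that get expanded into full sentences. The codes and wording below are made up for illustration; this isn’t Warnock’s setup or Word’s actual macro recorder.

```python
# A toy illustration of the "macro"/comment-bank idea: short codes that expand
# into full, reusable comments. The codes and wording here are invented.

CANNED_COMMENTS = {
    "thesis": "Your thesis is a promising start; try making the claim more arguable.",
    "frag": "This reads as a sentence fragment. Check that it has a subject and a main verb.",
    "cite": "Good evidence here. Remember to add an in-text citation for the quotation.",
}

def expand_shortcodes(feedback_draft: str) -> str:
    """Replace ::code:: markers in a rough feedback draft with the full comments."""
    for code, comment in CANNED_COMMENTS.items():
        feedback_draft = feedback_draft.replace(f"::{code}::", comment)
    return feedback_draft

if __name__ == "__main__":
    draft = "Nice opening paragraph. ::thesis:: Also watch this sentence: ::frag::"
    print(expand_shortcodes(draft))
```

As I understand it, the equivalent in Word is binding a saved entry or recorded macro to a keyboard shortcut; either way, the point is just that the comment text lives in one reusable place instead of being retyped for every paper.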

After failing to master macros, I moved on to playing with the rubric software that Warnock briefly discusses. Unlike the macros, these felt much more intuitive and not terribly different from the rubrics I make in Word. Well, at least RubiStar did, because that was the only one I could easily access. It’s free, which is something I love. That said, Rubrix looks to be a viable, perhaps sleeker version of it that is more responsive to the most recent technologies. However, it costs money, so I wasn’t able to do more than watch an informational video full of fun cartoons telling you how great rubrics are (5 stars: would watch again!). Either way, I think both provide really exciting possibilities for expressing expectations in a visually organized way without overwhelming the student (or yourself) with comments. I literally just passed out rubrics this week, so I may implement one of these and pilot it with my students this semester to see how they respond to it.
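For what it’s worth, here is my rough guess at what a tool like RubiStar is storing behind those cartoons: criteria, performance levels, and points, plus a way to total them for one paper. The criteria and point values below are invented for illustration, not pulled from RubiStar, Rubrix, or Warnock.

```python
# A hypothetical rubric as plain data: criteria, levels, and points, with the
# values invented for illustration only.

RUBRIC = {
    "Thesis & argument": {"Excellent": 4, "Developing": 3, "Beginning": 2},
    "Use of evidence": {"Excellent": 4, "Developing": 3, "Beginning": 2},
    "Organization": {"Excellent": 4, "Developing": 3, "Beginning": 2},
}

def score_paper(ratings):
    """Total the points for one paper, given a chosen level for each criterion."""
    return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

# Example: one student's draft, scored against the rubric above.
print(score_paper({
    "Thesis & argument": "Developing",
    "Use of evidence": "Excellent",
    "Organization": "Beginning",
}))
```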

Finally, I am most excited to try Camtasia. I haven’t actually gotten a chance to explore it yet, but hopefully I will soon. A representative from the Writing Center came to my class Thursday and told me more about how they use it with students and have seen good results from it. In terms of an OWcourse, I think this is an especially great tool for cultivating an online persona. It can be particularly important during rough draft stages. Being able to create a file that students can consistently refer back to in order to hear your feedback in real time (…well, in simulated real time…) sounds incredibly productive to me.

Getting to speak directly to your students in this way can help establish a more comfortable environment in which to receive feedback. I try really, really hard in my comments to make sure I sound upbeat and positive, which can lead to a lot of extra words. By vocalizing these feelings instead, I can save myself time while still being encouraging. Most importantly, I can customize my feedback for every single student, which feels far less viable in other formats. Maybe this could even be a way of addressing an issue that Megen raises about potentially feeling like a “brick in a wall” in her post. Through this tool, you can start talking to students through their writing in a personal way from brainstorming to outlining to drafting to revising. By making these videos really personal and attentive to the nuances of each student’s writing, we might be able to recreate some of that student-teacher interaction found in f2f classes.

To conclude, I wanna talk about peer review software. I’m a firm believer that teaching our students how to be effective peer reviewers will make them stronger writers. I try to incorporate peer review workshops into my f2f courses pretty regularly. Learning how to identify issues in other people’s writing can help us recognize those same problems in our own. Warnock offers tons of great examples for how to recreate peer review in virtual spaces (wikis, blogs, Waypoint, etc.), but the idea of creating small working groups organized via Canvas sounds the easiest to produce, so I’m going to try to pilot it this semester for their last essay (when they’ve hopefully had ample in-class practice with peer review). I’ll be sure to report results!

In the end, these tools sound like really great ways of improving our ability to respond to students while also saving ourselves some time. Without a doubt, there is an air of utopianism attached to some of these tools that I’m sure I’ll be disabused of when I start implementing them more fully. In the interim, I’m going to start incorporating some of these elements into my f2f classes this semester to test their limitations and affordances.

Assessment in Advance: Fostering Anti-Authoritarian Feedback

Reading through Chapters 11 and 12 of Teaching Writing Online has made me realize how much my pedagogy already reflects many online teaching methods, even though I’ve not yet taught an online course. (It’s funny: this week, one of my students asked me to walk her through our Canvas site because she was still confused. Afterwards, she thanked me for helping her because “I haven’t taken an online class before.” The comment seemed bizarre to me, and she didn’t qualify it in any way. Reading the chapter helped me better understand what she meant, though. I take for granted a lot of the tools in my on-site classes that, really, are also suited for online teaching.)

Commenting through Canvas

Overall, my main avenue for assessment and feedback has been Canvas’s built-in commenting functions (SpeedGrader, is it called?). I use its comment and highlighting features. Before I transitioned to Canvas last year, I used Blackboard’s similar function for a couple of years (whatever that one was called). Before that, I was handwriting comments on paper copies. I have found that I end up putting a lot more feedback on the digital versions, mostly because I type faster and find it easier to think when I’m at a computer (oddly enough). I do think I have a tendency to overwhelm students, so I would like to find new ways to approach commenting.

I also allow students to email me versions of their essay in advance for me to comment on, which somewhat minimizes my commenting on the final draft. I also prefer to get rough drafts digitally rather than in office hours, which I feel is a bit taboo, but I think the feedback is better and I don’t actually have to sacrifice much of the conversation.

However, I also try to build in a lot of assessment and feedback before essays are even submitted.

Assessing in Advance

Digital tools (and specifically Canvas, in this case) have been really useful to me when it comes to making sure students can assess their own writing without too much intervention from me (similar to what Warnock says: “good teachers can facilitate discussions onsite or online that feature students prominently, but at times, students need your guiding hand” (125)). I’m a big fan of learning through example, which I’ve built into my classes by simply having students post their essay drafts on Canvas discussion boards. This allows them to explore in a way unanchored from hard-copy peer review (and from peer review in general, which students often find a drag–though I still do it, since it allows me to also hold informal office hours during class time).

I’m skeptical of the concept of standards, because standards are so top-down; using forums and discussions available digitally, however, creates bottom-up “standards” that students can engage with. Students then absorb organizational structure, citation conventions, and other expectations simply by having a digital portfolio of student writing available to them. They see what options are available for their own writing, rather than being told (since the latter rarely ever works, at least for me).

Along with minimizing my own role as a figure of authority in favor of student-to-student learning, I also like Warnock’s idea of seeing my role as one of engaging in conversation rather than simply bestowing summative comments. I use digital tools to do this to some degree as well; I assign “reading responses” that are also posted on public discussion boards. These reading responses usually address aspects of the essay or attempt to develop skills relevant to the essay (without mentioning “the essay”); especially for about the first half of the semester, I respond at length to these posts, focusing on encouraging student ideas and avoiding too much “fixing” of grammar or critiquing of ideas. Many of the ideas ultimately end up in the essay, and, therefore, I have already added my assessments ahead of time. This is of course all possible with paper copies, but the conversation is then much less public, and the record of the conversation disappears too quickly, for both me and my students.

This has also made me think about how I could use audiovisual means to further support this conversation, making it more of an “f2f” type of conversation rather than “textual” in the way Warnock describes (although his definition of “text-based” is both fascinating to me and also a bit limiting). I am a little wary of audiovisual means of communication; they seem too performative and awkward to me, but if more natural conversations could be facilitated, I’d want to adopt more of those means (maybe through Skype or something like that, though overall I’ve mostly rejected such tools. I did use Google Hangouts when I worked at a graduate writing center, but it just felt so clunky and unnatural).

A lot of my strategies are, then, preemptive rather than tied to a particular moment of “assessment.”

In terms of reflecting a bit on the future (though the future is scattered throughout everything I’ve written already), I would like to figure out a way to use these tools to better assess one of the aspects of student writing most important to me: close reading. Things like comments (whether voice, video, or traditional) and quizzes don’t seem particularly useful for getting students to really understand how to close read better. Often, when my students revise their essays, the close reading still remains lacking. This is one area I’m still at a loss to figure out, and Warnock doesn’t have many answers I like (comment banks and macros are rather frightening possibilities to me–they are so impersonal).

The theme in this post seems to be that I strive for a form of assessment that feels natural, decenters my authority, and is personal rather than mechanized.

Never Just Another Brick in the Wall: Genuine Online Response and Feedback

As Warnock humbly admits (137), so shall I, too: I give a lot of feedback, probably much more than is necessary. In my f2f classes the vast majority of my feedback is handwritten—I collect hard copies of my students’ assignments and write comments in the margins and spaces throughout. Additionally, I like to compose an end-of-reading reflection paragraph encompassing my major points for consideration. I also provide students with a rubric showing them where their paper falls on argument, development, organization, language/mechanics, and various other assignment-specific criteria. I like giving my students this variety—if one student is very cerebral and prefers exact numbers, they can focus on the rubric, which is also useful for showcasing the course standards. The comments and reflection paragraph are more specific, detailing the student’s strengths and the areas that could use some focus.

However, all this handwriting is exhausting. My typing speed is around 80 wpm—my writing speed, on the other hand, is probably something horribly slow like 11 wpm. With handwriting taking over seven times longer than typing the same comment, I’m long overdue for a move toward these newfangled grading programs.

Tools:
I still think there is virtue in handwriting, specifically because it takes longer for me to write than to type. Handwriting makes me think carefully over how to comment, which means I usually write a bit more considerately than I would were I to type out the response instead. I never want to compromise the integrity of my feedback, especially when my students genuinely read and care about my observations. However, maintaining this care becomes a challenge when my hand starts cramping. And my head gets achy. And my wrist becomes sore. And oh, let’s not forget the stiff neck! We’ve all been down that road—I know many of us practically have timeshares on that street! Alright, the metaphor is running away from me, but my point is that grader’s fatigue is a very real thing that we all deal with. I’m hoping that electronic response can help to alleviate some (if not most) of it.

Since I still provide mostly handwritten feedback, I’m still new to these tools. However, here are a few I’m interested in:

  1. SpeedGrader: This is the one tool that really seems to rule them all. I haven’t used it yet, but I’m excited to start and have grandiose plans. I especially love SpeedGrader’s comment feature, view rubric feature, and media file attachment feature—this last in particular is the tool of my dreams, because it translates directly into AV comments! More on that later.
    1. The rubric is lovely because students are able to see specifically how many points they achieved for each criterion. I like how dynamic the rubric is—you can give comments as well as show where each rubric score falls on a spectrum.
    2. The comments are of course extremely useful, and probably the most helpful tool available for English instructors. If some evil magician robbed me of all my methods of responding to student writing save one, I’d hope he’d leave me with my commentary. I truly feel students need to see exactly where their papers do well and where they fall short—otherwise, they’re left just guessing, which isn’t conducive to the learning-and-growing writing process. Besides, who among us hasn’t had a teacher or professor who gave notoriously confusing feedback? I had several myself, and would never want to be thought of that way! Comments are so important, and they’re the one feature that seems to pop up in most tech teaching tools.
    3. I also really like SpeedGrader’s draw and highlight tools—I see these as being particularly useful for syntax, spelling, and other language mechanics. I imagine I’d start off trying to note everything via the comment feature, but I would probably eventually use color coding for sentence craft. For example: yellow highlighting = run-on, purple highlighting = fragment, etc. (a quick sketch of this color-key idea follows the list). The combination of typed comments and color coding should greatly reduce the time I spend on feedback while increasing the amount of feedback itself.
    4. Finally, there’s SpeedGrader’s record/upload media feature: AV feedback!!!! I’m ridiculously excited to use AV feedback, and am already considering it for my f2f classes. I absolutely love that you can use both audio and video recordings in SpeedGrader.
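To keep that color scheme consistent from paper to paper, I could imagine storing the key in one place and pasting a legend into my end comments. SpeedGrader doesn’t read anything like this, of course; the snippet below is just a hypothetical way of keeping the color-to-meaning mapping in sync, with the colors and wording made up.

```python
# A hypothetical highlight-color key for sentence-level issues (colors and
# wording invented for illustration; SpeedGrader itself has no such file).

COLOR_KEY = {
    "yellow": "run-on sentence: consider splitting it or adding a conjunction",
    "purple": "sentence fragment: it is missing a subject or a main verb",
    "green": "strong sentence worth imitating elsewhere in the paper",
}

def legend_text(color_key):
    """Build a student-facing legend to paste at the end of a graded paper."""
    lines = ["Highlight key:"]
    for color, meaning in color_key.items():
        lines.append(f"  {color} = {meaning}")
    return "\n".join(lines)

print(legend_text(COLOR_KEY))
```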

I think my use of these tools would be a combination of all of the above: comments and draw/highlight for specific responses throughout my students’ papers, the rubric for explaining how they did in terms of criteria and standards, and record/upload media for the end-of-paper reflection paragraph I compose.

… also, this last tool (the AV feature) is the answer to the biggest concern I’ve had regarding writing response: how can we encourage our students to actually read our electronic feedback?

The one reason it’s taken me so long to move to electronic evaluation is my belief in the genuine feelings handwriting transmits. Something that continues to surprise and embolden me is how much my students seem to actually read and consider my commentary. When I’ve briefly attempted electronic feedback in the past (mostly through the comment feature in Microsoft Word), students would at times ignore or fail to read my responses. This, wonderfully enough, hasn’t been much of a problem with handwritten critiques. I really do think there is something personal in each of our handwriting styles; a handwritten note, then, seems to reach my students a bit more directly.

I think providing students with even one minute of AV commentary can make a big difference between cold, almost robotic-sounding response and sincere, personal assessment. If they hear my voice, see my face, or watch a video of me going through their paper, I think they’ll be encouraged to pay more attention to my feedback. Warnock says he has a lot of success with AV feedback (131), and from a sociological standpoint, it makes complete sense. Therefore, when considering his success, I focused on two more tools I’m thinking of using when it comes to assessing student writing: Dragon and Skype.

2. Dragon NaturallySpeaking: I know very little about this program beyond what I’ve heard from various colleagues at MiraCosta. It looks promising in that it allows a user to dictate their comments via voice-to-text. That’s all I know about it so far, other than that I’d personally have to get through a serious learning curve to use it efficiently. Still, I definitely talk faster than I type, so it seems like a useful program.

3. Skype: I’m old-fashioned. I know there are spiffy new ways of conducting video-chats, but Skype—despite the occasional glitching—is fairly reliable. I’d like to use it to video conference students while reviewing their papers and going over revision reports. Again, anything to make the writing response process as personal as possible is my goal. In doing so, my hope is they’ll truly consider my feedback and advice.

Circling back, one last thing I want to mention is Warnock’s advice to change our system of grading when teaching OWcourses. I think this absolutely makes sense, and I completely agree; there is going to be a lot more informal writing in an OWcourse, and I’d like to encourage my students to write as much as they can. Giving more weight to these informal responses seems like a very natural shift.

A risk in moving over to an OWcourse is that students might feel like a brick in the wall or part of a machine (yep—I’ve had Pink Floyd stuck in my head throughout all of these chapters thanks to Warnock’s comments on robotic voices and inauthentic feedback!). Honestly, though, I think this is a real risk. When taking online courses as a student, I never felt like my professors viewed or cared about me as an individual. As Warnock advises, by commenting on my students’ informal responses and giving such posts weight, I can show them that I am listening: I’m actually reading their posts, and my comments show I’m paying attention to them.

A random idea I had during Chapter 10 was to create “icebreaker” posts for each lesson—I already take roll in my f2f classes by asking fun warm-up questions (like “where is the best place to get Italian food in north county?”). This proves, semester after semester, to be a lovely way to build classroom community. It’s also my sneaky way of getting them to relax and start talking. I think such an exercise would translate well into warm-up posts for each online session, and it wouldn’t take me very long to comment on them.

It is my hope that by implementing these practices, paired with AV feedback, I’ll be able to develop just as much rapport with my OW students as I do in f2f classes.