Without my illegible handwriting, how will they learn anything?

I’m going to open with — I like my illegible handwriting in the margins of student papers.  I find it difficult to capture the same kind of flourish in the online environment.  And, I have, through years of repetitive thinking, convinced myself that my students find the scribbles endearing.

That said, I really do prefer to scratch it out on a physical surface.  I find that I can leave a more dynamic comment that way, literally drawing connections between disparate parts of a paper by … drawing.  I am also faster at leaving feedback in this format — at least, for now — and speed is a legitimate concern our author raised, since some of us have a hundred or more students making similar mistakes in their writing.

To this end, I have been eyeing the new 12.9-inch iPad Pro, thinking that I might be able to approximate the physical grading in the electronic environment by using a stylus to write on the electronic copies students send.  I have been teaching online for 10 years, and I have spent a lot of time waiting for this moment when technology would finally catch up and allow me to return to a pre-technology form of grading.  Yet, last spring, during the first leg of this prep, we spent some time considering whether we should be trying to force our on-the-ground practices into the online environment unchanged or whether what we are really talking about is a translation of those practices.  In other words, we should be taking our best practices from our years of on-the-ground teaching and re-imagining them in the online environment.

So, I should be asking myself, How does my handwritten feedback translate to the electronic grading environment?

And, I think the answer is — it doesn’t.  What does translate is my commitment to substantive feedback.  So, what tools are available in the online environment that might not only facilitate the communication of feedback to students but enhance it?

One strategy I will use will be to reduce the amount of time I spend on low-end, repeated comments through macros.  If I can auto-fill the comments I make a million times across student papers, like those associated with punctuation and in-text citation formatting, I can spend more time on high-end feedback.  I have resisted this move because it has always felt like, well, cheating.  However, if I am writing the same comment fifty times in a single grading session, what’s the difference between my repeated handwritten note and the one that the computer fills in automatically?
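The macro idea above is just a lookup table: a shortcode expands into a full canned comment. Here is a minimal sketch in Python, assuming you keep your repeated comments in a plain dictionary; the shortcodes and wording are hypothetical examples, not a built-in feature of any grading tool.

```python
# A hypothetical bank of canned comments, keyed by short codes
# you would type (or pick) while grading.
MACROS = {
    "cite": "In-text citations in MLA style list the author and page number, e.g. (Smith 12).",
    "comma": "Use a comma before a coordinating conjunction that joins two independent clauses.",
    "frag": "This is a sentence fragment; it needs a subject and a main verb.",
}

def expand(shortcode: str) -> str:
    """Return the full canned comment for a shortcode, or flag an unknown one."""
    return MACROS.get(shortcode, f"[no macro named {shortcode!r}]")

# Typing "frag" expands to the full sentence-fragment comment:
print(expand("frag"))
```

Text-expansion utilities and LMS comment libraries do essentially this; the point is that the fiftieth repetition of a punctuation note costs a keystroke instead of a sentence.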

Legibility.

Another tool that I plan to make use of is combining typed comments with voice comments.  The opportunity for this has existed for a while, but not the ease of it.  I trained in Canvas in the spring, and I am teaching my first two courses in this CMS this fall; leaving voice comments while grading is integrated into this system and easy to use.  I like the opportunity to explain a comment I make with a quick verbal elaboration, rather than typing out a lengthy response.  It’s what I would do if a student approached me in class to go over a bit of feedback he or she received.  I can also see using this feature for my global, end comments on papers.  Video feedback is also pretty easy to use through Canvas, but I am not convinced that it will provide something essential that I can’t accomplish with a combination of typed and voice feedback.

I don’t know that these strategies really affect my philosophy about providing feedback so much as begin to satisfy my concerns that feedback in the online environment has the potential to be less than the student needs.

Unit 1: Feedback and Assessment

This is such a big question, curry!  Feedback happens everywhere!! My current primary tools for providing instructive feedback are rubrics and the Suggesting feature in Google Docs, but this post will focus only on rubrics, since Google Docs seems pretty straightforward to me. I do offer other types of feedback on low-stakes writing—reading responses and reflection activities, for example—but my intent in this feedback is to be encouraging, supportive, or just to let the students know that I am reading every word they write 🙂 . I think this type of feedback might fall into what Warnock refers to as response, when he distinguishes between response and grading. When/if I am able to teach online, I imagine that I will heavily use Canvas’s recording feature in addition to these tools, both to improve the variety of feedback students receive and to help personalize the online environment.

I saved samples of papers I’ve graded over the years. These are the papers that never got returned because students dropped the class or because they disappeared at some point during the semester. Writing this blog post, I became curious about how my comments have changed, so I reread my comments on some of these essays. I was struck by how much of my marginalia (Y E S ! I’ve always wanted to use that word in earnest!) focused on the specific assignment in my early years of teaching. Over the past dozen years, though, I see a shift, and my comments are largely about specific skills. I attribute this shift to when I started to hone my rubrics.

My philosophy with rubrics is that they are teaching tools. I believe students should learn from them not only about their performance on the specific task but also how to improve specific writing skills. I break the rubric down into critical thinking, structure, evidence, sentences, grammar, vocabulary, and documentation. Here’s what my current English 100 rubric looks like for argumentative essays (I have a different rubric for a rhetorical analysis essay assignment and for our new lens essay assignment): http://www.writingteachertools.com/english-100-grading-rubric/

All this might be (actually, I’m sure it is) a long way to get to my focus of this post: optimizing the rubric tool in Canvas to improve not only my feedback but also students’ understanding of my feedback in onsite classes and (hopefully) in online classes.

I remember during our fall certification sequence somebody sharing research about how infrequently students actually read our comments—not much 🙁 —and have thought about why this might be ever since. One reason might be that students don’t understand what we mean by the comments. For example, if the comment is that the paragraph lacks development or focus or unity or coherence, I can imagine some of my students not knowing what the heck I am talking about—despite the fact that I feel like I’ve explained these concepts—in overwrought detail—a zillion times.

So . . . here’s my idea for using the rubrics tool in Canvas. I’m considering using them to maximize both peer- and self-evaluations at various drafting stages in order to reinforce student understanding of basic compositional concepts and/or their understanding of specific critical thinking skills. In f2f classes, I’ve been doing versions of the types of evaluations I’m imagining, but with pen and paper and perhaps with not as much clarity of instruction or accountability as might be possible with Canvas. My intent is that these peer- and self-evaluations promote deeper understanding of what sometimes seem to be amorphous concepts like development and unity.

One of the resources I found in curry’s annotated bibliography discussed various types of rubrics in Canvas: Single Point, Analytic, Primary Trait, and Holistic. Here’s a direct link to the PPT, if you want to check it out:

https://docs.google.com/presentation/d/10UnK0-9i-OaU7RGl66sukfCaHsIOBizK710xenlcuYg/edit#slide=id.p

I mocked up a few of these rubrics to concretize how I might use each of these rubric types.

Single Point – Use to improve thesis
Analytic – Use to improve Level One Unity* 
Primary Trait – Use to improve Level Two Unity*
Holistic – My final grading rubric

(No mock-up here!  I ran out of time!)

*These are teaching tools I created to help explain the concept of unity to students. You can check them out here and here.

Canvas Questions:  To implement these evaluations, I’m guessing that I create an Assignment that requires peer reviews and then attach the corresponding rubric. Does anyone know if this is right, or am I missing something?

Also, does Canvas have a way for the instructor to grade a student’s comments on a peer review? I tell students that I consider peer reviews a type of test (an open-book, OK-to-ask-me-questions-during kind of test) for which they receive grades based on the quality of their comments and their demonstrated understanding of the assignment requirements. I know where I can see the student’s peer comments, but I couldn’t figure out if I can grade that student’s comments.

Welcome to the new WritingwithMachines Blog!

It’s new. It’s shiny! It’s supported by our very own MiraCosta College (at least, they’re lending us the prestige of their WordPress URL)! And there is even a little \w/M/ avatar saluting you from the corner of your browser tab!! Look at that little guy! How cute!!

To find the archive of our past blogs, please visit writingwithmachines.com. For all things new and exciting and techy and teachery…bookmark this blog:

https://wordpress.miracosta.edu/writingwithmachines

WritingwithMachines in Fall 2017

The WritingwithMachines group is looking forward to the fall semester. This blog is about to become enriched once again as we begin our pedagogy-based course on teaching composition with technology, the second course offered in our Certification Sequence. This semester, our “brown-bag” discussions will move to an online forum. We’ll miss the snacks, but we’ll likely enjoy the more flexible and accessible forum Zoom affords. If you are interested in participating in exchanges that explore the intersections of writing, reading, technology, and pedagogy, we hope you will join us for one or many of these discussions.

WritingwithMachines’ Fall Certification Sequence

The Fall Certification Sequence will begin September 11th. Faculty who participate will complete 5 units over the course of 10 weeks, covering topics such as:

  • the benefits and limitations of digital feedback and assessments
  • how issues of accessibility and universal design are linked to concepts of rhetoric and composition
  • how to create collaborative assignments using networked technologies

We hope many of our colleagues who completed the Spring Certification Course will continue to edify our discussions this fall, and for anyone interested in joining us now, more information about how to prepare for the Sequence will be forthcoming. In the meantime, please scroll down and explore this blog, or contact curry mitchell at cmitchell@miracosta.edu for more information.

WritingwithMachines’ Fall Meeting Schedule

The WritingwithMachines group will host 3 open-discussion meetings in the fall on 9/21, 10/19, and 11/16 from 7:00 to 8:00 pm in Zoom. Participation is FLEX eligible. Stay for the whole hour or drop in for a bit. Enjoy rich inquiries with colleagues regarding reading, writing, technology, and pedagogy. Oh, and Canvas…we’ll probably talk about Canvas too.

Contact curry (cmitchell@miracosta.edu) if you’re interested, or check back on this blog to find the url to join the discussion. Hope to see you there!!