Monday, April 11, 2011

A Formula for Motivating Online Discussion

I chose the article, “Designing Grading Policy to Motivate Student Participation in Online
Discussion Forum,” for my last editorial entry because, as a Graduate Teaching Assistant, I spend many hours each week grading online discussion posts, and lack of student participation is a problem I have noticed.  In this journal article, Zuopeng Zhang investigates how teachers should design grading policies for online forums to elicit maximum student effort.  Zhang draws on scientific studies of student participation in online discussion forums to derive mathematical formulas that predict student behavior under different grading policies.

The data uncovered that students will respond to the online discussion prompts that take the least time to answer satisfactorily while earning the most points possible.  Basically, if too many posts are required in a given semester, the quality of students' posts declines. Therefore, a teacher who wants to encourage quality student work should design a grading system that awards more points to students who choose prompts that take more time to answer and involve higher-level thinking.  However, the study also reveals that because not all students have enough time to answer the prompts that require the highest level of thinking, some students will instead answer many low-level prompts to accumulate the same number of points as answering one more difficult prompt.  Ultimately, Zhang suggests that teachers should establish a minimum number of required posts that accounts for students who have less time to participate; that teachers should give more points to posts requiring higher-level thinking if they expect fewer but higher-quality posts; and that teachers who want to encourage quantity instead of quality should award more points to students who answer the lower-level prompts.

The findings were fascinating; I encourage anyone with a math background to explore the complicated formulas. I will attach the article to this post.  What I like best about this article is that it discusses a model for discussion-post grading that I have not been exposed to in my short career as a Teaching Assistant.  In the blended grading model designed to encourage both quality and quantity of posts, students are given many prompts to answer, with the hardest prompts worth a maximum of fifteen points, medium-level prompts worth no more than ten points, and the easiest prompts worth at most five points.  The posts are graded on this scale, and points accumulate during the semester toward established totals for the corresponding letter grades.
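Out of curiosity, I sketched the arithmetic of this blended model in a few lines of Python. The fifteen/ten/five point caps come from the article, but the letter-grade cutoffs and all of the names here are my own hypothetical choices, not Zhang's.

```python
# Sketch of the blended grading model described above. The per-tier
# point caps follow the article; the semester letter-grade cutoffs
# are invented for illustration.

POINT_CAPS = {"hard": 15, "medium": 10, "easy": 5}

# Hypothetical semester totals required for each letter grade.
GRADE_CUTOFFS = [(270, "A"), (240, "B"), (210, "C"), (180, "D")]

def score_post(tier, points_awarded):
    """Clamp an awarded score to the cap for the prompt's tier."""
    return min(points_awarded, POINT_CAPS[tier])

def semester_grade(posts):
    """posts: list of (tier, points_awarded) pairs for the semester."""
    total = sum(score_post(tier, pts) for tier, pts in posts)
    for cutoff, letter in GRADE_CUTOFFS:
        if total >= cutoff:
            return total, letter
    return total, "F"

# A busy student answering many easy prompts can reach the same
# total as one answering fewer, harder prompts.
print(semester_grade([("easy", 5)] * 30 + [("hard", 14)] * 9))  # (276, 'A')
```

The sketch also makes the quality incentive visible: because awarded points are clamped rather than guaranteed, a weak answer to a hard prompt can still score below a strong answer to an easy one.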

With this grading system, working students who do not have extended blocks of time to devote to online posting can still earn good grades by answering many easier prompts during the week.  I also liked that the points assigned to the prompts were not guaranteed, which encourages quality.  If a student answers a harder question, they need to do a good job to earn all of the points. 

Unfortunately, this article offered no research on encouraging responses to other students’ posts to foster a conversation.  Because most of the online discussion boards I have graded feature a theoretical discussion between students as the main objective of the exercise, this article does not give me tips for motivating that behavior.  However, I could still apply these findings to my grading scales for initial posts and develop a different system for calculating points for replies to those posts.

Zhang, Zuopeng. “Designing Grading Policy to Motivate Student Participation in Online
Discussion Forum.” International Journal of Innovation and Learning 9.1 (2010): 1-20. Print.

Friday, April 8, 2011

A Terrible Grading Policy

            In these editorials, I try to look at both sides of an issue, but when I read Paula Wasley’s article “A New Way to Grade: Students Grade Students” in The Chronicle of Higher Education, I have to admit that I did not see any positive aspects to the grading system described.  In the controversial Texas Tech freshman composition grading system, students’ writing is graded not by their classroom instructor but by a “document instructor,” who never meets the students.  The document instructor, actually a graduate assistant, grades the papers sent by e-mail using a software program that finds errors.  Once graded by one instructor, the paper is sent through the grading program to another graduate-student document instructor to be graded again.  If the two grades are more than eight points apart, the paper goes to a third grader. Also in this system, freshman composition students spend less time in the classroom and more time writing.

Proponents of the computerized system say that it establishes set standards for “good” writing, makes grading impartial, and saves the university money while compiling invaluable statistical data about how freshman composition students are progressing.  Opponents of the grading system argue that teachers lose all of their power and influence over the class by not grading their own students’ papers, and they are upset that grades, even an 89.99 percent, cannot be rounded up to the next letter grade.
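The double-grading mechanics Wasley describes reduce to a simple reconciliation rule. Here is a minimal sketch: the eight-point threshold is from the article, but the averaging and the way the third grade is resolved are my own assumptions, as are all of the names.

```python
# Hypothetical sketch of the Texas Tech double-grading rule described
# above: two independent scores, with a third grader called in only
# when the first two disagree by more than eight points.

DISAGREEMENT_THRESHOLD = 8

def final_score(first, second, third_grader=None):
    """Average two scores, or call a third grader to break a wide split."""
    if abs(first - second) <= DISAGREEMENT_THRESHOLD:
        return (first + second) / 2
    if third_grader is None:
        raise ValueError("scores differ by more than 8; third grader needed")
    # One plausible resolution (an assumption, not from the article):
    # average the third score with whichever original score is closer.
    third = third_grader()
    closer = min(first, second, key=lambda s: abs(s - third))
    return (closer + third) / 2

print(final_score(85, 90))              # 87.5
print(final_score(70, 90, lambda: 88))  # 89.0
```

Seeing the process laid out this way only sharpens my objection: nothing in the rule knows anything about the assignment's context or content.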

My first objection to this program is that because classroom instructors do not grade their students’ papers, they lose a valuable medium for communicating with students.  I understand a grade as a way for the instructor to tell students whether they are meeting the course goals and whether they understand the course material.  If teachers are not allowed to assess their own students’ papers, how can they determine whether the students are learning what they are teaching? The students should be upset that they are denied valuable instructor feedback. The document instructor does not know what the classroom instructor has told the students; therefore, the generalized comments left on the digital papers will be completely out of context in relation to the classroom instruction.

The next flaw I see in this program is that it relies heavily on a computer system to assess “standardized” good writing.  Texas Tech complained that there were many definitions of good writing in its composition program before adopting the computerized document-instructor method; however, the premise that there is one single form of good writing is flawed, because what counts as good writing depends completely on the context of the discourse community and the subject the writer is discussing.  Because the computer program only counts punctuation and form, it completely leaves out the content of the essays.  The administration seems to think the overworked graders will factor “content” into their assessment, but how can they if they do not know the context of the assignment the teacher gave?  Even with a standard set of assignments, each teacher will deliver his or her expectations differently.  In this system, writing is treated like a math problem with an absolute right answer.

My final objection to this grading system is that it takes power away from the classroom instructor and takes students away from the classroom by reducing face-to-face time.  For example, one composition classroom instructor, Lindsay Hutton, said, “People think you're just there as this kind of middleman, which you are. Once I started to feel that way, it became difficult for me to really put much into it” (Wasley, n.p.).

How can a graduate student grading hundreds of papers a week with a computer program really take the time to account for craft or style?  I do not think they can. I see how this system saves Texas Tech money, but I believe that the students miss out on quality first-year composition instruction.

Wasley, Paula.  “A New Way to Grade: Students Grade Students.” The Chronicle of Higher
            Education 52.27 (2006): n.p. Gale. Web. 4 March 2011.

Sunday, March 20, 2011

How to Avoid Bludgeoning Students

In The Chronicle of Higher Education article that I chose for this week, David Brooks humorously reflects on his six years as a Teaching Assistant and how he refined his ability to grade papers.  He explains how, when he was new to grading papers, he filled the margins with comments and corrections in an attempt to assert his authority over his students and prevent them from questioning the grades he gave them.  However, he eventually learned to use his comments to help his students improve their writing and work through their ideas.
I definitely recommend this article to everyone in our class who is a Teaching Assistant or new to grading papers.  In fact, I am attaching it to this post in a Word document so it will be easy to access.  Brooks’ writing is not overly academic, though he clearly has an authoritative voice.   
This article struck me immediately because its title, “Wielding the Red Pen,” reminded me of our in-class discussion about how blunt to be with our comments on papers.  I liked Brooks’ solution to this dilemma because he suggests that teachers should actually withhold full disclosure of a student’s mistakes.  He says that he writes comments that challenge students to investigate their mistakes:  “In the margins of a test or paper, I pose questions that encourage students to decipher their errors, rather than bludgeon them with my recognition of those errors. I favor a few comments at the end of a paper that assess the work as a whole” (3).
I completely agree that teachers’ comments should challenge students to rethink their papers and ideas instead of simply telling them when they are wrong.  Brooks said that his probing, indirect comments resulted in his students meeting with him to discuss how they could improve their papers.  In this light, it seems that comments can also serve to extend the learning experience of the paper itself.  Grading papers is certainly an art, and I realize that I have much more to learn before I am an effective and efficient grader; however, Brooks offers a nice starting point for new teachers.  Do not bludgeon students with comments; instead, encourage them to investigate their missteps.

Brooks, David. “Wielding the Red Pen.” The Chronicle of Higher Education 57.24 (2011).
   Educator's Reference Complete. Web. 17 Mar. 2011.

Assessing Teachers?

This week I decided to write about effective teacher assessment instead of effective student assessment.  In her article “The Courage to Seek Authentic Feedback,” Alexis Wiggins asserts that student surveys are vital to improving her teaching methods and determining students’ learning needs.  The end-of-year surveys of her high-school English class provided such useful insights that she even implemented a mid-term class survey.  She cites an example where her mid-term survey revealed that most students wanted to study more grammar, which dispelled the complaints of a few students who objected to the grammar lessons.   After years of conducting pen-and-paper student surveys of her teaching, Wiggins took a faculty paper survey and realized that she was afraid the administration would recognize her handwriting.  Wiggins now claims that her new online surveys allow students to be more honest than they were before and to discuss tough issues such as grading, favoritism, and course materials.
I find Wiggins’ story of survey success overly optimistic.  She says that no student has ever used the online surveys to personally attack her.  She also seems to imply that the surveys allow her to resolve all pedagogy issues. 
I believe that teacher-designed, online mid-term surveys in a college course could offer the professor a way to monitor the atmosphere and progress of the class.  However, an anonymous online survey could also give students a place to voice personal attacks against the teacher.  The success of the survey would depend on the temperament of the class in general, and it could not be mandatory and still stay anonymous.
I personally think that periodic anonymous, one-minute papers would involve less work and provide a more regular barometer for student lesson comprehension.  Though, according to Wiggins, the written format would mean that students might be afraid to really voice their opinions.   
I will reserve my final verdict on the effectiveness of online teacher surveys until I have the opportunity to test them in a class for myself.
Wiggins, Alexis. “The Courage to Seek Authentic Feedback.” Education Digest 76.7 (2011): 19-21.  Educator's Reference Complete. Web. 1 Mar. 2011.

Monday, March 7, 2011

Gambling for Grades

This week, I listened to a National Public Radio broadcast that explores issues surrounding Ultrinsic, an online website that allows students to wager on their grades and earn money.  A student’s profit is based on a formula that considers their past performance and expected performance, and students are only rewarded when they exceed expectations.
The founders of the company claim that by law this is not gambling because grades are not based on chance (the legal definition of gambling involves a game of chance) but are earned with skill.
The founders of Ultrinsic claim that they are not currently focused on profiting from the website but only want to spread the idea of “incentivized” grades.  They claim to be providing a service to students by giving them a monetary incentive to improve their grades.  This “service” was available to students at thirty-six colleges at the time of the broadcast; UCF is not one of the schools on the Ultrinsic website.
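The broadcast does not disclose Ultrinsic's actual formula, so the sketch below is purely illustrative: it captures only the stated principle that a student profits when the final grade exceeds an expectation built from past performance, and loses the stake otherwise. Every number and name here is a hypothetical assumption of mine, not the company's.

```python
# Hypothetical illustration of an "incentivized grades" wager. The
# payout rule (reward only for exceeding an expectation based on past
# performance) follows the broadcast's description; the expectation
# and payout formulas themselves are invented.

def expected_gpa(past_gpas):
    """Naive expectation: the student's average past GPA."""
    return sum(past_gpas) / len(past_gpas)

def settle_wager(past_gpas, actual_gpa, stake):
    """Return the student's net gain (positive) or loss (negative)."""
    expected = expected_gpa(past_gpas)
    if actual_gpa > expected:
        # Hypothetical payout: stake scaled by the margin of improvement,
        # so bigger improvements from lower starting points pay more.
        return stake * (actual_gpa - expected)
    return -stake  # fail to beat expectations, lose the stake

print(settle_wager([2.0, 3.0], 3.5, 50))  # 50.0
print(settle_wager([3.8, 3.9], 3.7, 50))  # -50
```

Even in this toy version, the asymmetry is plain: the student risks a fixed loss for an uncertain gain, which is exactly what worries me about the real service.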
My initial reaction to this was disgust, and my feelings remained the same, even after I listened to the rest of the broadcast. 
I truly feel that this company is taking advantage of an emotionally and financially unstable population.  I can understand why the founders wanted to go on NPR and promote a positive image of their “incentivized” grade program: they want to spread the myth that they are helping students perform better.
If a student needs the possibility of losing money to motivate them to study, then it does not seem that they have the proper motivation needed to succeed in college.  Jeremy Gelbart, one of the founders of Ultrinsic, said: “We know this is an ulterior motive, but we think that when students are - when they would have a choice between going ahead and partying versus going to study for the books, they're going to choose study for the books because they're going to get the extra dollar” (14:30).
Also, this “incentive program” offers the biggest monetary payouts to students who have lower grades and improve; so their customers may be in danger of losing scholarships, and if they fail to improve they will lose money to Ultrinsic.
I cringed when Gelbart justified taking money from college students, who are most likely in debt, by saying that this was their fee for providing grade-motivation services.
Am I closed-minded to reject this idea completely?  I am curious how everyone else feels about this.  
Here is a link to their website:  http://www.ultrinsic.com/about.html
“New Website Lets Students Bet on Grades.” Narr. Neal Conan. Talk of the Nation. Natl. Public Radio.
                2010. NPR Internet Archive. Web. 17 Feb. 2011.  

Tuesday, March 1, 2011

Do Students Benefit More from Strict or Lenient Grades?

            I chose to review this article because it tries to answer a fundamental grading question: if grades are supposed to motivate students, are they more motivated by lower or higher grades?  The article describes past studies illustrating that a strict grading scale motivates students to put more effort into their coursework and learn more.  However, it also points to studies that found the opposite: students pushed to work harder by a stricter grading scale were stressed emotionally and mentally and did not learn as much because of that stress.  Additionally, it was found that while some individual students may work harder when exposed to a stricter grading scale, others may give up when faced with tougher standards or be completely neutral to the situation (Elikai and Schuhmann 679).  In general, past studies supported the belief that higher standards result in most students earning higher grades (682).

            I found it interesting that the researchers noted that, logically, people will work harder if the incentives (higher grades) are great enough.  Strangely, this did not hold true for courses that students were not required to take for their major.  Therefore, for their study the researchers chose an accounting class required for the major; if students dropped the course, they would also need to change majors.

            The results of the study showed that students evaluated with a stricter grading scale performed better and received higher grades.  Contrary to previous studies, the researchers found that lower-achieving students earned higher grades on the strict scale.  Also, fewer students dropped the class possibly because they were more invested in the coursework (690).  On another note, the different grading standards did not affect the teacher’s performance reviews, which were almost the same for all classes.

            When I began reading this article, I thought that lower-achieving students would surely be discouraged by the higher standards.  However, it does make sense that students who do not normally go above and beyond would feel pressed to perform in a required course.  It seems that the higher standards inspired a make-or-break mentality in students who might otherwise be lazy.  I was not surprised that the study confirmed that normally high-achieving students will work hard in any environment.

            To me, an aspiring literature teacher, this study implies that a stricter grading scale should be used in major courses to inspire students to work harder.  However, in a survey course a stricter grading scale might cause low-achieving students to give up because they see a good grade as unattainable.  Students in a survey course would also be more likely to drop a course with higher standards.  I believe the concern that students will drop such a course can be resolved if expectations of superior performance are clear from the first day; this gives students who are not required to take the class, and who do not want to perform, the chance to drop it.  Again, this would not work if a student must take the course.  Basically, I would reserve a stricter grading scale for the environment where it will have a positive effect: upper-division major courses.
 Work Cited
Elikai, Fara, and Peter W. Schuhmann. “An Examination of the Impact of Grading Policies on
            Students’ Achievement.” Issues in Accounting Education 25.4 (2010): 677-693.
            Professional Development Collection. Web. 2 Feb. 2011.

Tuesday, February 15, 2011

What about When the ABCs Fail?

image from: http://dconrad3.wordpress.com/
Most of the articles I have encountered in my search to improve my knowledge of grading systems have stressed why traditional grades do not work.  So, I looked for an article that suggested a solution and found “A Simple Alternative to Grading” by Glenda Potts.
Potts says she was inspired to research and write the article because she felt that she put more effort into grading her students’ composition papers than they put into writing them.  She notes that in recent years educators have looked for other ways to assess students because letter grades merely rank students instead of fostering learning.
Potts switched to contract grading, a system in which each assignment is accompanied by a description of behaviors and tasks and the corresponding grade for each level of effort.  The idea behind contract grading is that instead of trying to rank the quality of writing, the contract will inspire students to perform tasks that result in learning (31).
I really liked this article and the contract system it suggests.  I was at first skeptical, thinking that a student who turns in ninety percent of assignments would receive an A even if the work is not of high quality.  However, I found it useful that each assignment has minimum criteria; this means that a student who completes every assignment at only the satisfactory level will earn a C, not an A.
I also liked that assignments are returned to the students marked “Accepted,” for work that meets standards; “Revise,” for work that needs improvement in content or clarity; or “Edit,” for work with grammatical or formatting issues.  Finally, students who want an A or a B are required to complete an extra paper or project to separate themselves from the average students.
I was still a little skeptical about this method until I read the note that the National Council of Teachers of English endorses the contract system over traditional systems.  Potts also revealed that she recorded traditional grades alongside the contract grades and found that the two were overwhelmingly the same.  The only exceptions were students who would have earned Bs but opted out of the additional assignment and earned Cs instead.
Finally, what sold me on this method was the fact that Potts said she spent less time grading and more time helping her students learn and improve their writing skills.
Work Cited:
Potts, Glenda. “A Simple Alternative to Grading.” Inquiry 15.1 (2010): 29-42.
          ERIC. Web. 10 Feb. 2011.