Tuesday, May 1, 2012

Robo-Readers

It’s interesting to see technology taking such a big leap into so many parts of life. It can help a lot in education, but there are some parts of schooling that robots and high-tech machines should stay away from. Grading essays and papers, for example, is one thing that actual humans still have a better grasp on than a machine.

The depth of an argument, the point a paper is making, is a key part of the writing process and the final product; it would be a shame if people started to lose the ability to form a genuinely well-written, strong piece of writing. Les Perelman, a director of writing at the Massachusetts Institute of Technology, has done a lot of research on the automated reader, and it seems the automated reader can be beaten, to the point that “He tells students not to waste time worrying about whether their facts are accurate, since pretty much any fact will do as long as it is incorporated into a well-structured sentence” (Source B). The key things to use when writing for the automated reader are long sentences, long paragraphs, a long paper in general, no sentences beginning with ‘or’ or ‘and,’ connectors (‘however,’ ‘moreover’), and big words. In reality, the substance of the argument has no effect on the grade at all.

This is shown even more clearly by the company’s own website. ETS explains the features included in the e-rater scoring engine, and the one criterion that touches on content is “content analysis based on vocabulary measures.” That most likely means that if a person uses big words rather than small words, the automated reader will see the paper as better argued. This is where Perelman sums it all up in one sentence: “The substance of an argument doesn’t matter, he said, as long as it looks to the computer as if it’s nicely argued.” The ETS website focuses on showing that the scores are based on measurable things like “proportion of grammar errors,” “proportion of usage errors,” “proportion of mechanics errors,” and others. Some of these features are probably extremely helpful, but until the machine has the ability to actually evaluate an argument, there is no way a human should just be pushed aside.
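To make that concrete, here is a purely made-up toy sketch (in Python) of what a scorer built only on those kinds of surface measures might look like. It is not ETS’s actual e-rater; every feature, weight, and threshold here is invented for illustration. The point is just that nothing in a formula like this ever checks whether the facts or the argument make any sense.

# Toy illustration only: NOT the real e-rater. All weights, thresholds, and
# feature choices are invented to show how a purely surface-level scorer
# could reward length, big words, and connectors while ignoring substance.

CONNECTORS = {"however", "moreover", "therefore", "furthermore"}

def toy_surface_score(essay: str) -> float:
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]

    length_score = min(len(words) / 500, 1.0)                 # longer papers score higher, up to a cap
    avg_sentence_len = len(words) / max(len(sentences), 1)
    sentence_score = min(avg_sentence_len / 25, 1.0)          # long sentences rewarded
    big_word_score = sum(len(w) >= 8 for w in words) / max(len(words), 1)   # crude "vocabulary measure"
    connector_score = min(sum(w.lower().strip(".,;") in CONNECTORS for w in words) / 5, 1.0)

    # Weighted sum of surface features only; the accuracy of the facts never appears anywhere.
    return round(10 * (0.3 * length_score + 0.25 * sentence_score
                       + 0.25 * big_word_score + 0.2 * connector_score), 2)

# A nonsense "essay" stuffed with connectors and long words still earns a respectable score.
print(toy_surface_score("However, the moon is manifestly made of cheese. Moreover, " * 40))

Under a formula like that, a paper that repeats “however” and “moreover” and stuffs in long words can outscore a shorter, accurate one, which is essentially the loophole Perelman describes.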

There are some arguments that hint a human may not be the best option for grading papers, but for the most part they aren’t very strong. In Torie Bosch’s piece on the “Robo-Readers,” she brings up a point from someone identified only as “Shermis” (which already raises a credibility question, since readers are never told who “Shermis” is) that is actually decent. Shermis said that automated essay scoring is “nonjudgmental,” and “it can be done 24/7. If students finish an essay at 10 p.m., they get feedback at 10:01.” Even so, the argument could be stronger; it can be argued that looking at the actual meaning of the essay matters more than getting the scores back sooner.

Unless more advancements are made, machines should probably leave essay scoring in the hands of humans. Maybe a machine could handle the grammar and spelling, but a human should still be there to actually interpret the text. Technology is really important in everyday life, but sometimes it still needs to advance before certain steps can be taken.
