
Handling Assessment

Assessment can be one of the most challenging aspects of managing a course as a student instructor. No matter what you do, your peers will almost certainly push back on your assessment model at some point. It is therefore very important to come up with an assessment plan that you believe will be most beneficial to student learning, and that you feel you can back up and defend when the time comes.


One thing we have found that might save you some grief is making your assessment plan as objective as possible. Unlike professor-assigned grades, or even grades assigned by a CA in a course taught by a professor, grades assigned by a student instructor tend to be taken with a grain of salt. One of your most valuable tools, should your grades be questioned, is your content advisor: they can double-check your assessment, and if you are backed up by a professor, students will be much less likely to make a fuss.


We have tried two different methods of assessment and found pros and cons to each. We are not suggesting that either of these models will definitely be a fit for your course; we are simply offering two examples of possible methods to get your own investigation and thought process started.


The assessment model we used in DSA in spring 2019 was adopted from the prior two runs of the DSA SLC. It was an extremely simple and objective model: if your code ran and passed all test cases (and you got it checked off in time), you got full credit on the assignment.

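In case it helps to see just how mechanical that model is, here is a minimal sketch of all-or-nothing grading against a set of test cases. It is written in Python, and the function name and test format are purely our own illustration, not the actual DSA checkoff infrastructure:

    # All-or-nothing grading: full credit only if every test case passes.
    # The (args, expected) test format here is hypothetical, for illustration.
    def grade_submission(submitted_function, test_cases, full_credit=10):
        """Return full_credit if the submission passes every test, else 0."""
        for args, expected in test_cases:
            try:
                if submitted_function(*args) != expected:
                    return 0
            except Exception:
                return 0
        return full_credit

    # Example: checking a (correct) sorting submission.
    tests = [(([3, 1, 2],), [1, 2, 3]), (([],), [])]
    print(grade_submission(sorted, tests))  # prints 10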

As creative as we are, our initial thought was to adopt a similar grading scheme for Advanced Algorithms in spring 2020. Unfortunately for us, we soon realized that the different nature of the course called for a different assessment plan: there was simply no way to run test cases on explanation- or proof-based problems. We also wanted our grading scheme to place more emphasis on effort than DSA's "we don't care if you spent 12 hours on it; if it doesn't pass our test cases, we can't give you credit" attitude.


Unfortunately for us, we are lazy and did not want to go to the effort of crafting a whole new assessment plan. Fortunately for us, we have taken many Olin classes with a wide variety of assessment strategies. One that we particularly liked was the plan used by Sarah Spence Adams in her discrete math course. We (with her permission, of course - thanks, Sarah <3) completely ripped off her method to come up with the following scale for grading lab and homework problems:


E: Excellent (10/10)

VC: Very good, but poor communication (8/10)

VM: Very good, but small math mistake or misprint (8/10)

R: Major misunderstanding, needs revision (4/10)

RAA: Acceptable after revision (8/10)

RUA: Unacceptable after revision (4/10)

N: No submission (0/10)

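If you want to turn those codes into assignment grades programmatically, the mapping is simple enough to write down in a few lines. The sketch below is in Python; the names and the averaging step are our own illustration, not part of Sarah's scheme:

    # Numeric score (out of 10) for each rubric code from the scale above.
    RUBRIC = {"E": 10, "VC": 8, "VM": 8, "R": 4, "RAA": 8, "RUA": 4, "N": 0}

    def assignment_score(codes):
        """Average the per-problem scores for one assignment (0-10 scale)."""
        return sum(RUBRIC[code] for code in codes) / len(codes)

    # Example: three problems graded E, VM, and R (before any revision).
    print(assignment_score(["E", "VM", "R"]))  # (10 + 8 + 4) / 3 = 7.33...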

As you can see, students are greatly rewarded for revisiting problems they initially struggled with. This gave them the flexibility to make a solid first attempt knowing they could revise their work after the answers had been posted, and we found it effective at encouraging a genuine initial effort without the pressure to spend ridiculous amounts of time flailing on a problem that was going nowhere.


One of the other things you may have noticed (if you are familiar with Sarah's wonderful assessment plan) is that we assigned numbers to each of the letter categories. This was an important step for us, as student instructors, in making the assessment plan as objective and transparent as possible.
