This post originally appeared on EvidenceProf Blog.
Multiple choice testing is a popular assessment format in Evidence courses, more popular in my experience in Evidence than in other courses. Some professors use it exclusively, while others mix multiple choice questions with essay questions on their exams. There's good reason for using multiple choice testing in Evidence courses. For one, the MBE portion of the bar exam contains multiple choice Evidence questions, so including them on a final exam helps prepare students for the bar exam format. In addition, multiple choice testing has a long history and is widely accepted as a credible format for assessing student knowledge. Evidence is also a heavily rule-based class that lends itself to an assessment format requiring students to identify a single correct answer. Finally, multiple choice questions allow professors to assess more topics than can be squeezed into an essay question, reducing the chances that a student performs well on an exam simply because he happened to know the issues covered by the essay questions.
But there can be a large gap between good multiple choice questions and bad ones. This post is about how those of us who use multiple choice questions can know whether we are doing so in a way that makes for sound assessment. The credibility of our multiple choice questions as assessment tools is particularly important given the high-stakes testing that goes on in so many law school classrooms. When the great bulk, if not the entirety, of a student's grade hinges on a single three- or four-hour exam, it is our duty to take advantage of the available tools to ensure that our exams function as credible assessment instruments.