Classroom Assessment Techniques for Law School Teaching

Presented at the Eighth Annual Conference for Law School Teaching, 2001

Barbara Glesner Fines
UMKC School of Law




Apart from skills-focused classes such as legal writing or clinical courses, most law school classes provide little formal assessment of student learning while that learning is taking place. In most law school classrooms, assessment of learning is either an informal process of observation during day-to-day classroom teaching or an exam after the class has ended.  While both assessment tools are valuable, neither is designed expressly for improving individual student learning.  The informal observations of classroom learning and the semester-end, summative examinations are either too little or too late to achieve that goal.

 That goal can be achieved through the use of other assessment techniques during the semester.  Frequent, timely and focused assessment is critical to improving student learning.  Frequent assessment can also result in metacognitive gains, as students develop the skills for self-assessment of learning. As awareness of learning motivates further learning, a cycle of success can increase student learning in sometimes dramatic fashion.

 A second important goal of classroom assessment of student learning is the improvement of faculty teaching.  While most faculty are aware of the need for frequent feedback to improve student learning, what faculty sometimes recognize only intuitively is that they, too, need frequent and timely assessment in order to improve their teaching.  B. Davis, Tools for Teaching (Jossey-Bass 1993).  The following materials explore methods of obtaining feedback on student learning throughout the semester, both to improve that learning and to inform one’s own teaching.  Many of the techniques described are built upon those compiled by Thomas Angelo and K. Patricia Cross in Classroom Assessment Techniques.[1]  The techniques those authors gathered share several characteristics: they are “learner-centered, teacher-directed, mutually beneficial, formative, context-specific, ongoing and firmly rooted in good practice.”[2]

 A Process for Using Classroom Assessment Techniques

 As with all effective teaching and learning, the critical first step is to clarify your goal or purpose. What do you want to discover about your students’ learning, and how do you plan to use the data you gather?  Do you want to assess what students bring into a course or what they are taking from it?  What aspect of student learning do you want to learn more about: knowledge, skills, or attitudes and values?  

 Bloom’s taxonomy of educational objectives[3] is a time-tested tool for clarifying goals for assessment.  The following summary of the Cognitive Domain taxonomy may be helpful in identifying the specific objective.[4]

 1.      Knowledge: The remembering (recalling) of appropriate, previously learned information, including terminology; specific facts; ways and means of dealing with specifics (conventions, trends and sequences, classifications and categories, criteria, methodology); and universals and abstractions in a field (principles and generalizations, theories and structures).

o        defines; describes; enumerates; identifies; labels; lists; matches; names; reads; records; reproduces; selects; states; views.

2.      Comprehension: Grasping (understanding) the meaning of informational materials.

o        classifies; cites; converts; describes; discusses; estimates; explains; generalizes; gives examples; makes sense out of; paraphrases; restates (in own words); summarizes; traces; understands.

3.      Application: The use of previously learned information in new and concrete situations to solve problems that have single or best answers.

o        acts; administers; articulates; assesses; charts; collects; computes; constructs; contributes; controls; determines; develops; discovers; establishes; extends; implements; includes; informs; instructs; operationalizes; participates; predicts; prepares; preserves; produces; projects; provides; relates; reports; shows; solves; teaches; transfers; uses; utilizes.

4.      Analysis: The breaking down of informational materials into their component parts, examining (and trying to understand the organizational structure of) such information to develop divergent conclusions by identifying motives or causes, making inferences, and/or finding evidence to support generalizations.

o        breaks down; correlates; diagrams; differentiates; discriminates; distinguishes; focuses; illustrates; infers; limits; outlines; points out; prioritizes; recognizes; separates; subdivides.

5.      Synthesis: Creatively or divergently applying prior knowledge and skills to produce a new or original whole.

o        adapts; anticipates; categorizes; collaborates; combines; communicates; compares; compiles; composes; contrasts; creates; designs; devises; expresses; facilitates; formulates; generates; incorporates; individualizes; initiates; integrates; intervenes; models; modifies; negotiates; plans; progresses; rearranges; reconstructs; reinforces; reorganizes; revises; structures; substitutes; validates.

6.      Evaluation: Judging the value of material based on personal values/opinions, resulting in an end product, with a given purpose, without real right or wrong answers.

o        appraises; compares & contrasts; concludes; criticizes; critiques; decides; defends; interprets; judges; justifies; reframes; supports.

Of course, this listing focuses only on cognitive learning outcomes.  One may instead be interested in students’ opinions and values, their communication skills, or their assessment of their own learning.

The key here is to choose a very specific context and very specific information you want to gather about student learning.   In a civil procedure class, assessing whether students understand the minimum contacts test from International Shoe is not something one can accomplish in a single classroom assessment.  One can, however, determine whether students can articulate the test itself or explain one factor from the test.  One can also ask students to identify the part of the test that is least clear to them.  Remember to keep assessment simple and focus on those aspects of the class that present the greatest potential for affecting teaching and learning.

Having chosen a goal, one can then design a strategy for assessment.   Choose from some of the techniques described in these materials or design an assessment that meets your particular needs and teaching style.  Consider implementation issues.  Should student performance be anonymous? (Anonymity can give students greater freedom in expressing opinions and taking risks, but it reduces accountability and does not provide you with a way to gauge individual learning gains.)   Do you want assessment of individual learning, or will paired or small-group assessment provide you with sufficient information (or serve other learning and teaching goals)?  Will the technique be comfortable or foreign to the students?  How much introduction will be required to carry out the assessment?  Even the simplest assessment device benefits from informing the students of its purpose.

Once you actually implement an assessment technique, be sure to follow through.  Analyze the information you have gathered.  What have you learned about the students’ learning? How will that knowledge affect your teaching?  How will you share what you have learned with the students?  Students will be more willing to engage actively in assessment activities, and will learn more from them, if you explain how the assessment results can be used to improve their own individual learning.

Assessment Techniques

1.   Improving a Tried and True Assessment Technique: Watching Student Non-verbal Cues

Every teacher watches his or her students to assess teaching and learning in the classroom.[5]  We might observe students to assess their understanding, engagement, and attitudes, and adjust our pace, content, or presentation accordingly.  In particular, we might observe students for understanding or lack of understanding of a particular discussion or lecture.

Obviously, there are significant limitations on assessment based on non-verbal feedback.  The student who looks the most confused may in fact have the most sophisticated understanding of the material and may simply be grappling with the cutting-edge of the material being addressed.  The student who appears hostile may simply have a stomachache.

Moreover, non-verbal signals are highly culture-bound, and even gender-bound.  For example, suppose the class is being presented a very controversial theory.  Many students are nodding their heads.  Does that mean they agree?  For most men, this is the likely explanation (“I agree.”).  For most women, however, nodding is used to encourage further conversation (“I’m listening.”)[6]  Eye contact, posture, and where a student chooses to sit in a classroom all might say something about the student’s learning or reactions to your teaching... or they might not.

How might that process of watching be improved to increase the validity of non-verbal feedback as an assessment device?   By consciously planning and implementing the technique as one would any other classroom assessment technique.

Choose a question:  Is my pace through overheads (or PowerPoint slides) appropriate when conducting a lecture class?

Choose a technique: I will watch for students’ note-taking, attentiveness, and other non-verbal cues that indicate whether I need to slow down or speed up.

Implement the technique:  Introduce it to students: “I have a tendency to move through slides fairly quickly.  I will try to watch you all to be sure I’m not going too fast.”  (With this introduction, students will more readily provide the non-verbal feedback you need to match your pace.)

Follow through: Explain to students when you choose not to slow down (“I see some of you would like to review that overhead more; we can’t right now, but I will have copies available after class”) and acknowledge when you do (“Oops, you need to see that one a minute more?  Sure.”).

Similar explicit attention to reading non-verbal cues as a source of feedback can improve this technique we use almost daily, though in implicit, often unconscious ways.  Of course, one can move beyond mere observation and ask students for feedback (“Am I going too fast?”  “Did I clarify that concept?”).

2.  Improving a Tried and True Technique:  Classroom Dialogues

Along with pure lecture, the overwhelming majority of law school classes are taught by a dialogue method.[7]  Faculty can obtain a good deal of assessment information about the student or students participating in the dialogue, though the validity of that information may depend on the student’s response to the stress of “the hot seat.”  Given the pervasive use of this teaching technique, it would seem that time spent in developing dialogue as assessment would be most productive as well as comfortable to all concerned.

One problem with using classroom dialogue as assessment is that we are sometimes unsure what it is we are assessing with any set of questions.  Often we are not truly trying to assess student learning as much as promote thought or organize learning.  However, if we carefully design questions with assessment in mind, we can gather information about the student’s knowledge, skill, attitudes or preparation.

The second problem with dialogue as assessment is that it assesses only the learning of those students participating in the dialogue.  To gather information about the learning of the class as a whole, we need to find a way to broaden the dialogue.  Two simple variations on the traditional dialogue method can increase the number of students to whom we can direct questions and from whom we can obtain responses:

Variation One:  Am I right?

One easy way to broaden dialogue is simply to poll the class for agreement or disagreement with a particular student’s response.  There are important reasons why one would not want to call on a student (or accept a volunteer) to engage in a dialogue knowing that the student’s answer would be subject not only to your critique but to a vote of peers.  In the competitive and often stressful law school classroom, such a technique could quickly destroy class rapport and alienate students from one another. The same effect, however, can be obtained by placing yourself in the “hot seat” – responding to a question or posing an analysis of a particular problem and then asking the students to vote: “Am I right?” Students can vote by raising hands, displaying cards or signs you have distributed ahead of time, or – if the classroom is equipped – providing electronic “votes.”  Using methods that do not require students to display their answers to others may provide more accurate assessment.  You can require participation (“Everybody has to play”) or not, depending on your goal.

Using the feedback:  If the vast majority of the class answers correctly, you can simply provide a brief explanation and then move on.  If, however, the majority of the class is incorrect, you can backtrack, address the misconception (to an audience whose attention has been sharpened by being “wrong”), and then move forward again.  If the class is divided, you can also provide explanation and move ahead or, for more active learning by all participants, ask students to turn to someone who gave a different answer and convince that person of the “correct” response.  The ensuing dialogue will, as often as not, replicate the one-on-one dialogue you would be having with the student who did not understand.


In a professional responsibility class, students often do not distinguish carefully among the categories of withdrawal from representation – either confusing mandatory with permissive withdrawal, or confusing withdrawals for which one must show no material adverse effect on the client with withdrawals that are justified even if such an adverse effect would result.  The instructor would prepare a hypothetical in which withdrawal was permissive but would be adverse to the client.  After presenting the hypothetical, the instructor would propose an analysis that reflects the typical confusions and then ask “Am I right?”  After noting the student response, the instructor would then adjust the remainder of the discussion of that doctrine to reflect student understanding.

Variation Two: Dialogue with groups

Rather than asking a single student to present arguments, analysis, or an articulation of knowledge, pose the question to the class as a whole, invite the class to break into small groups or pairs to discuss their answer, and then have groups report back.  This "think-pair-share" technique is a cornerstone of cooperative learning, but it can also provide an efficient method for assessing the learning of the class as a whole.  From the reporting of each team or group’s answers, the instructor can assess student learning and proceed as is appropriate to that feedback.  For example, after the first one or two groups have responded, the instructor can ask whether another group has come up with something different or additional.  The safety of a group response will often encourage students to risk answering incorrectly. This is especially so if the instructor minimizes any sense of competition among groups to get the “right” answer and provides students positive reinforcement for their participation.  For some discussions, groups might be asked to write out their answer on an overhead transparency to be shown anonymously to the class.

3.  Borrowing from the Past: The Pop Quiz

Short multiple-choice or short-answer quizzes can be powerful tools for assessing and promoting student learning and improving the quality of teaching.  So long as the quizzes do not count toward the final grade (or count only a minimal amount), students appreciate the clear, timely feedback these quizzes can provide.  Since most students are comfortable with quizzes, they require less introduction and meet with less student resistance than other methods might.  Quizzes can be used to assess students’ background knowledge or understanding in order to plan approaches to lessons, to establish a baseline against which to measure student learning, and to assess student understanding.  They can also serve a number of purposes beyond assessment, such as guiding student learning and discussion of a subject, setting up class discussion, or reviewing materials already learned.  Quizzes can, of course, be part of a summative evaluation process as well.[8]

Design questions carefully.  Designing multiple-choice questions is an art form in itself, on which others at the conference will be providing additional information.  True-false questions are often easier to start with and may be just as effective in assessing student knowledge, though they may not assess higher-order cognitive skills as well.   Short-answer questions need to be precisely written to obtain valid assessment.  Despite the difficulty of designing quizzes, the benefits are well worth the investment of time.  Students will be quite forgiving of poorly drafted questions if no grade is involved (indeed, some additional learning benefit and class rapport can be gained by incorporating an “appeal” process into quizzes – nothing creates class solidarity faster than proving the professor wrong!).

Consider when to give the quiz.  Depending on the purpose of the quiz, you may be comfortable distributing the quiz as part of class preparation and using class time to review it.  Quizzes can be designed for computerized administration so students can take the quiz online and receive immediate feedback.  A reporting function in some computer-assisted instruction programs will give you access to students’ scores.

Use the data to improve learning.  As with all assessment devices, students appreciate knowing how they did individually, how that performance compares with their peers, and how the assessment device will be used to improve their learning.

4.    Fill in the blanks

Often the greatest confusion students have in mastering any particular area of law lies in finding the appropriate organization or categorization for doctrines.  Assessment devices can be specifically geared toward viewing students’ “maps” of a subject.  You can ask students to sketch a flowchart of a concept, or you can provide structures for them to fill in.  To make the assessment device efficient, you should focus on content or structure but not both.  (See examples below.)

To use this strategy effectively, you must ask yourself why a structural overview would be useful to the students’ learning at this point.  Do students need to learn to break down and analyze a rule? Are students losing the “big picture” in the midst of learning a doctrine or concept?  Do students need assistance in seeing relationships between ideas?  Are the students at the point in their learning that synthesis and condensation of material is critical to their ability to use the information?  Obviously, the incomplete outline or graphical map at the end of the semester will be geared more toward synthesis and overview than in-depth analysis and organization of any particular set of ideas.

Earlier in the semester, outlines and graphical maps can help students identify main ideas or see the overall organization of one topic, identify relationships between ideas and rules, or guide the students through a process of problem solving in a particular area of law.

Using the feedback.  Review the assessments for common misconceptions and areas of uncertainty.   Follow up with additional clarification.  Design problems against which students can “test” their matrix.

For example, in Civil Procedure II, I give the following matrix to students and ask them to put the appropriate language in the appropriate boxes.






Directions:  Work in Pairs.  Below is a chart containing the four categories from the minimum contacts test.  Following the chart is a list of terms.


First, decide which terms go with which category.  (The terms may not be sufficient to support jurisdiction or even relevant.  Don’t worry about that issue yet.  Just place the terms in the categories in which they make the most sense.  If a term does not appear to relate to any category, leave it out.  If a term could appear in more than one category, put it in the category that it best fits.) Second, for each category, arrange the terms in the order that best articulates the law regarding that category.  






[Chart: four columns, one for each category of the minimum contacts test, with blank cells for students to fill in.  Two of the category headings survive here: “State of Mind” and “(Fair play/justice).”]

Awareness that product will enter state
Continuous
Affecting state citizens
Foreseeability that product will enter state
Frequent
Geographical location of forum state
Giving rise to the cause of action
Having a physical presence
Having a logical connection
History of the type of jurisdiction asserted
Identity of the plaintiff
Identity of the defendant
Integral to the claim
Intentionally directed toward the state
Large volume
Large percentage of business
Location of evidence and witnesses
Nature of the cause of action (type of law)
Presence of international effects
Presence of property
Purchases in the state
Purposefully placing in stream of commerce
Substantial
Volume, value and hazardous character



  As a follow-up activity, I ask the students to select one or two terms from each category and construct a hypothetical that could be characterized as meeting those terms and those terms only.   I then have them argue about whether personal jurisdiction would be constitutionally appropriate under those facts.

Graphics can even be used to assess attitudes.  For example, in my professional responsibility class, I ask students during the first class to draw a picture or some symbols that represent a “professional.”  This exercise provides insight into, and extraordinarily rich discussion of, student attitudes toward their chosen career path.

5.       Stop, Ask and Listen

Perhaps the easiest way to assess student learning is simply to ask.  Many of the techniques described in the educational literature are simply variations on stopping class for a moment, asking a question, and then having students provide a short written response.  Described by Angelo & Cross as “The Minute Paper” and introduced to law professors as “free writes,”[9] the technique has a number of variations depending on the information one is soliciting from the students.  To use the technique, the professor simply stops the class and asks students to respond (on an index card or half-sheet of paper) to one of several questions, such as:

"What was the most important thing you learned during this class (from this reading, from this discussion, etc.)?"
"What important question remains unanswered?"
"What was the muddiest point in ........?"
"Summarize the key points of this doctrine."
"Paraphrase the holding of X (or the doctrine of X)."
"Give one example of ..."

    Be sure to analyze the data obtained and report back to students.

  Sources for more information


Angelo, T. A., and Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers, 2nd ed. San Francisco: Jossey-Bass.

Davis, B. G. (1993). Tools for teaching. San Francisco: Jossey Bass.

Murray, H. G. (1991). Effective teaching behaviors in the college classroom. In J. C. Smart (ed.), Higher education: Handbook of theory and research, Vol. 7 (pp. 135-172). New York: Agathon.

[1]   Thomas Angelo and K. Patricia Cross, Classroom Assessment Techniques (2d ed., Jossey-Bass 1993).

[2]   Id. at 4.

[3]   B.S. Bloom et al., eds., Taxonomy of Educational Objectives: The Classification of Educational Goals (1956).

[4]   The summary is provided by Professor Günter Krumme, University of Washington, Seattle, at (with permission) (last visited June 8, 2001).

[5]   Well, not every teacher.  I once had an elderly history teacher in junior high school who, at the beginning of class, would sit at her desk at the front of the room and talk to a pencil held in her lap.  Fifty minutes would pass without so much as a glance at us.  Needless to say, little learning took place among the 13-year-old students, no matter how motivated they were to learn.  Recently, I have been reminded of that teacher as I have attended lectures or classes conducted with PowerPoint presentations, in which the instructor talks to the computer screen much as Mrs. M talked to her pencil.  The technology may have improved, but the teaching hasn’t.

[6]   Deborah Tannen, You Just Don't Understand (1991).

[7]   Steven I. Friedland, How We Teach: A Survey of Teaching Techniques in American Law Schools, 20 Seattle U. L. Rev. 1 (1996).

[8]   These materials focus on assessment techniques designed to be formative – that is, to inform and improve learning and teaching – rather than summative, reporting on the end results of the teaching and learning in a course.

[9]   David Dominguez, Laurie Zimet, Fran Ansley, Charles Daye & Rod Fong, Inclusive Teaching Methods Across the Curriculum: Academic Resource and Law Teachers Tie a Knot at the AALS, 31 U.S.F. L. Rev. 875 (1997).