Assessment support toolbox

Evaluate and improve

What kind of information can you use to evaluate and improve the quality of your assessment?

  1. Peer review. When constructing an exam, you can ask a colleague to check it and provide feedback. This helps prevent problems such as unclear questions or a test that is too long. Use the checklists for constructing questions as a checklist for yourself.
  2. Analysis of the test results (pass/fail percentage, average grade, etc.). Are the results satisfactory? As expected? Can they be explained?
  3. Analysis of the test at item level. Was the test reliable? Were the questions of good quality?
  4. Impressions of the involved teachers/assessors (including TAs). What stood out?
  5. Student evaluation data (SEQ) or information from a student panel.
  6. Special circumstances or complaints (e.g. something went wrong during the test-taking moment, or many students didn't finish in time).
  7. Inspection moment: did any particular insights emerge from the inspection moment?
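The result-level and item-level analyses above (points 2 and 3) can be sketched in code. A minimal illustration, assuming `scores` is a 0/1 matrix of students by questions and a points-based pass mark; all names are illustrative, not a standard library API:

```python
# Minimal item-analysis sketch (illustrative, not a standard API).
# scores: list of rows, one per student; each entry is 1 (correct) or 0.
def item_analysis(scores, pass_mark):
    n_students = len(scores)
    n_items = len(scores[0])
    totals = [sum(row) for row in scores]
    # Pass/fail percentage: share of students at or above the pass mark.
    pass_rate = sum(t >= pass_mark for t in totals) / n_students

    stats = []
    # Rank students by total score for the discrimination calculation.
    order = sorted(range(n_students), key=lambda i: totals[i])
    k = max(1, round(0.27 * n_students))  # conventional top/bottom 27% groups
    for j in range(n_items):
        col = [row[j] for row in scores]
        difficulty = sum(col) / n_students  # p-value: proportion correct
        # Discrimination index: top group's correct rate minus bottom group's.
        low = sum(scores[i][j] for i in order[:k]) / k
        high = sum(scores[i][j] for i in order[-k:]) / k
        stats.append((difficulty, high - low))
    return pass_rate, stats
```

A question that nearly everyone gets right (difficulty near 1) or that high scorers do no better on than low scorers (discrimination near 0) is a candidate for revision before reuse.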


Tip: If you didn't make a specification table beforehand, a useful validity check is to reformulate the questions back into their underlying learning objectives and verify that the questions cover the learning objectives at the right level. For instance: Question: Calculate .... based on...  Underlying learning objective: The student is able to calculate ..... given ...... 
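This coverage check can be made concrete: tabulate which learning objective each question maps to, then flag objectives left untested. A small sketch, with all names hypothetical:

```python
# Hypothetical coverage check: question-to-objective mapping vs. the
# full list of learning objectives. Names are illustrative only.
def coverage_gaps(question_to_objective, objectives):
    covered = set(question_to_objective.values())
    # Objectives that no question tests; these are validity gaps.
    return [o for o in objectives if o not in covered]
```

Running this over the mapping you reconstruct from the exam immediately shows which objectives the test leaves untested; checking the cognitive level of each mapped question remains a manual step.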


Assessment results as information for evaluating the teaching process
The proof of the pudding is in the eating: the value and quality of the teaching process can, to some extent, be assessed based on the exam results.
The exam results show whether and to what extent (most of) the students have achieved the learning objectives, what students found easy or difficult, and which misconceptions still exist. All this information can help to improve the teaching process for the next round.

Evaluative questions - reflecting on the assessment cycle steps

When evaluating, you can look back on all stages of the assessment process. What are you satisfied with? Where are there opportunities for improvement?
If you worked in a teaching team, you can of course do this together. All input mentioned above can be considered.
Looking back at all phases of the test cycle, it will become clear that they are interlinked. For example, suppose you are not satisfied with the quality of students' papers. Looking back at the earlier phases, you might realise that your learning objectives were a bit too ambitious, or that the assignment description did not properly indicate what to focus on.

  • Questions you can ask when evaluating:
    • Are the learning objectives clear and am I actually testing these learning objectives now? All of them? At the right level? If not, do I need to change something about the learning objectives or the testing?
    • Are the chosen forms of assessment the most appropriate or are others conceivable? Are there reasons to choose a different form of test? For instance because it will make it even clearer whether the students have achieved the learning objectives, or for practical reasons.
    • Am I satisfied with the weighting and conditions for the chosen test formats?
    • Did I properly apply my specification table for the written test? Were the questions at the level as indicated? Am I satisfied with the weighting?
    • Would some more support, interim feedback or practice (practice test), have led to better results? (Formative assessment activities)
    • Test construction and assembly + test analysis: Were the questions of good quality? At an appropriate level? Distinctive? Was the overall quality of the test sufficient? Which questions or answer options need adjustment if reused? Was the test of appropriate length, both in terms of reliability and in being doable within the given time?
    • Transparency: Were students well prepared for the test(s) or assignment? Was the assignment description clear? Did the practice questions give sufficient insight into the type of test questions and expectations? Was it clear to students how the grade would be determined?
    • Test-taking: Were there any problems during the administration? Was appropriate action taken if anything happened and what lessons can be learned for next time?
    • Assessment of written test: Was the answer model used adequate? Specific enough?
    • Assessment assignments: Did the assessment tool provide sufficient guidance? Were the criteria clear? Was the tool practical? What could be improved?
    • If there were multiple assessors: Were the assessors on the same page (check scores and grades given)? Can interrater reliability be improved?
    • Test analysis of assignments: Did most students struggle to meet a particular criterion? Which one? What may have been the reason? How can this be avoided (for example: a clearer assignment description, interim feedback, or a criterion that was not appropriate or clear)?
    • For assignments: Were there plagiarism or free-riding problems? Were they adequately responded to? What could be improved?
    • Did the test inspection or post-assessment reveal any points of concern?
    • Reflection: Looking at the test results (marks) and work done by the students, can I explain this? What contributed to the result (positively or negatively)? What do we keep and what can be improved?
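The interrater-reliability question above can be checked quantitatively. One common measure is Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch for two assessors, assuming their grades are recorded as parallel lists of categories (names are illustrative):

```python
# Cohen's kappa for two raters (sketch; assumes equal-length lists of
# categorical grades such as "pass"/"fail" or letter grades).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items both raters graded the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    # Undefined when expected agreement is 1 (both raters use one category).
    return (observed - expected) / (1 - expected)
```

A kappa near 1 suggests the assessors are on the same page; low values are a signal to calibrate, e.g. by co-grading a sample of work and discussing score differences.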