Why do some knowledge checks work better than others? Part one
- Zoe Goldthorpe

- Feb 26

Rethinking knowledge checks
Knowledge checks are a familiar feature of most digital learning courses, yet they’re often one of the hardest elements to get right. When they work well, they help learners pause, reflect, and consolidate their understanding. When they don’t, they can feel superficial, frustrating, or disconnected from the rest of the course.
Developing digital learning content takes time and careful thought. Much of that attention quite rightly goes into structuring material, explaining complex ideas clearly, and deciding what learners need at each stage. In that process, knowledge checks can receive less focus and end up serving primarily as progress markers, rather than as tools that actively support learning.
Drawing on our experience designing digital learning and our backgrounds in teaching, this two-part guide brings together practical steps you can apply straight away, alongside the thinking that informs them. In this first post, we outline three initial ideas about what tends to work well, explain why those choices matter, and focus on small, deliberate adjustments that can strengthen your knowledge checks without requiring a full rewrite of your course.
Align every knowledge check to a learning outcome
When a learner reaches a knowledge check, they should know that it is there for a reason. Questions that are not clearly linked to a learning outcome can feel arbitrary, even when the surrounding content itself has been carefully developed.
Clear alignment helps learners understand what matters and why they are being asked to engage with a particular question. It also makes review and collaboration easier, as everyone involved can see what each question is designed to assess.
In practice, misalignment usually develops gradually. Learning outcomes evolve, questions are drafted at different stages, and links between the two can become blurred. Taking time to reconnect the two can help make sure the course has a clear direction and purpose.
Put it into practice
Create a simple list of your learning outcomes and map each knowledge check against them. This doesn’t need to be complex. A short exercise in checking and mapping is often enough to highlight gaps, duplication, or questions that aren’t clearly serving a purpose. Your end result may look something like the table below:
| Learning outcome | Module one | Module two | Module three |
| --- | --- | --- | --- |
| I can define key safeguarding terminology and statutory responsibilities. | Q1, Q2 | Q1 | — |
| I can recognise common signs and indicators of abuse or neglect. | Q3, Q4 | Q2 | Q1 |
| I can describe appropriate reporting procedures within a school setting. | — | Q3, Q4 | Q2 |
| I can apply safeguarding principles to realistic school-based scenarios. | Q5 | Q5 | Q3, Q4 |
| I can explain the importance of maintaining professional boundaries and confidentiality. | Q6 | Q6 | Q5 |
| I can identify the roles and responsibilities of key safeguarding leads within a school. | Q7 | Q7 | Q6 |
| Questions not yet clearly aligned | Q8 | Q8 | Q7, Q8 |
Checklist for success
Do you already have clear, measurable, approved learning outcomes?
Can you explain what each question is assessing in one sentence?
If you removed a question, would it leave a gap in assessing the outcome?
Is there any learning outcome that currently has no knowledge check linked to it?
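If you keep the mapping in a spreadsheet or other digital format, the gap-checking step can be automated. The sketch below is a minimal illustration in Python, assuming the mapping is held as a simple dictionary of outcomes to question IDs per module; the outcome names and question lists are invented for the example, not taken from the table above.

```python
# Illustrative sketch: flag learning outcomes with no linked knowledge check,
# and questions not yet aligned to any outcome. All data below is made up.

# Outcome -> {module name -> list of question IDs mapped to that outcome}
mapping = {
    "Define key terminology": {"Module one": ["Q1", "Q2"], "Module two": ["Q1"]},
    "Recognise signs and indicators": {"Module one": ["Q3"], "Module three": ["Q1"]},
    "Describe reporting procedures": {},  # no questions mapped yet
}

# Every question that exists in each module
all_questions = {
    "Module one": ["Q1", "Q2", "Q3", "Q4"],
    "Module two": ["Q1", "Q2"],
    "Module three": ["Q1"],
}

def find_gaps(mapping):
    """Outcomes that currently have no knowledge check linked to them."""
    return [outcome for outcome, modules in mapping.items()
            if not any(modules.values())]

def find_unaligned(mapping, all_questions):
    """Questions that are not yet clearly aligned to any outcome."""
    aligned = {(module, q)
               for modules in mapping.values()
               for module, qs in modules.items()
               for q in qs}
    return sorted((module, q)
                  for module, qs in all_questions.items()
                  for q in qs
                  if (module, q) not in aligned)

print("Outcomes with no knowledge check:", find_gaps(mapping))
print("Questions not yet aligned:", find_unaligned(mapping, all_questions))
```

The same checks translate directly to spreadsheet formulas if that is where your mapping lives; the point is simply that an explicit data structure makes gaps and orphaned questions easy to surface.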
Build in an appropriate level of challenge
Questions that are too easy can undermine a course more than we realise. Adult learners quickly notice when a course isn’t asking much of them, and this affects how seriously they take the content.
Often, the level of challenge comes down to small design decisions. Options such as ‘all of the above’ or ‘none of the above’ can make answers easier to guess than intended. If a question about benefits includes three clearly negative options and one positive one, the balance gives the answer away. Similarly, if the correct option is significantly longer or more detailed than the others, it can stand out visually. Take the following question, for example:
Which of the following is a benefit of regular reading?
a. It reduces vocabulary and limits critical thinking.
b. It improves concentration, strengthens vocabulary, and supports the development of analytical and reflective thinking skills over time.
c. It makes it harder to retain information.
d. It improves physical strength and coordination.
Even without prior knowledge, the balance makes the answer relatively easy to spot. Two options are clearly negative, one is implausible, and one is significantly longer and more detailed than the rest. The learner doesn’t need to think carefully about the topic; they only need to notice the pattern.
Well-designed distractors are plausible and comparable in length and tone, so that learners need to think rather than scan for clues. At the same time, challenge requires judgement. Overly complex or ambiguous questions can frustrate learners, particularly in self-paced digital learning where there is no opportunity to ask for clarification. The difficulty should come from the thinking required, not from unnecessary complexity.
Put it into practice
Review each multiple-choice question and ask yourself whether a reasonably informed learner would need to pause and think before answering. If the answer is obvious at a glance, consider how the options could be strengthened.
Checklist for success
Are the incorrect answers genuinely plausible?
Are all options fairly similar in length and structure?
Do learners need to understand the course content to answer correctly?
Have you avoided answer patterns that make guessing easier than thinking?
Make questions relevant to real situations
Relevance plays a significant role in learner engagement. Questions that feel abstract or disconnected from real-world practice can make it harder for learners to see the point of what they are being asked to do. Adult learners in particular tend to value learning that feels purposeful and applicable, especially when they are taking time away from other responsibilities to complete a course.
You do not need lengthy case studies to achieve this. Often, a single sentence of context is enough to shift a question from theoretical recall to practical application. Framing questions around decisions, actions, or judgement calls helps learners connect knowledge to situations they might realistically encounter.
Say we want to assess whether a learner understands what meaningful participation of women and girls looks like in practice. We could use something like:
You are planning a community consultation about a new water access point in a displacement setting. How can you best support the meaningful participation of women and adolescent girls?
a. Invite a small number of women to attend the main community meeting with leaders
b. Organise separate, safe consultation sessions for women and adolescent girls
c. Ask male community leaders to share the views of women in their households
d. Hold one open meeting for the whole community at a time chosen by local leaders
This question still checks understanding of meaningful participation, but it anchors that knowledge in a realistic planning decision. Rather than recalling a definition, the learner must weigh safety, access, power dynamics, and representation, and consider how the principle would guide their actions in practice. The scenario achieves this in just a sentence of added context.
Put it into practice
Review your knowledge checks for questions that relate to actions, decisions, professional judgement, or sequences of steps. These are often good candidates for a brief, realistic scenario. You do not need to replace every definition-based question, but identifying where application would strengthen understanding can add impact.
Checklist for success
Does the question reflect a situation your learners could genuinely encounter?
Does the scenario feel grounded in practice rather than invented for the course?
Is the scenario concise and focused on one clear decision?
Does the context help clarify why the principle matters?
Looking ahead: what happens after the question?
Well-designed knowledge checks do more than confirm whether a learner has selected the right answer. The sections above focus on the foundations of strong question design: aligning questions with learning outcomes, introducing an appropriate level of challenge, and grounding questions in situations that reflect real practice.
Together, these decisions help ensure that knowledge checks are purposeful, engaging, and clearly connected to what learners need to understand.
But an important part of the learning experience happens after a learner responds. What appears next — the explanation, guidance, or opportunity to reflect — can either close the interaction or deepen understanding.
In part two of this guide, we will look at how to make that next step meaningful. We will explore how effective feedback can reinforce key principles, how to use reflection in asynchronous learning, and how to design knowledge checks that support learning without expecting automated tools to assess skills they cannot realistically measure.
