Rasmus Blok · Mar 16, 2026 · 8 min read

Secure exams are not a feature

Secure exams are not a "setting". They are a shared risk model. If you run digital assessments at scale, you’ve probably heard a familiar refrain: “Why can’t the technology just stop misconduct?”

It’s a fair question, because from the outside, “security” can look deceptively simple: lock down the environment, monitor activity, and the problem goes away.

But secure exams are not a single control. They are a system of trade-offs across law, privacy, accessibility, pedagogy, operations, and a threat landscape that changes every semester. And that is exactly why the most important first step is not choosing a tool; it is choosing (and documenting) a risk profile.

 

START WITH THE INSTITUTION'S RISK PROFILE, NOT THE EXAM TOOL

Before you decide what to enable for a specific exam, you need a baseline answer to a broader question:

What level of integrity assurance do we need across our assessment ecosystem and what trade-offs are we willing to make to achieve it?

A mature approach treats exam integrity the same way you treat other institutional risks: you define what is acceptable, what is tolerable, what requires additional controls, and what is simply out of scope for a given exam type or context.

That institutional baseline then becomes the foundation for exam-by-exam decisions, rather than re-litigating “perfect security” every time.

This matters because it’s easy to miss the forest for the trees: if the only success criterion is that each individual exam must be 100% secure, you end up optimising for absolutes that do not exist – and often introduce new problems in the process. A better goal is to build an exam environment that is fair, proportionate, auditable, and defensible, using a layered model that matches your risk profile.

THE UNCOMFORTABLE TRUTH: MANDATORY PROTECTIONS CAN CREATE EXPLOITABLE "GAPS"

 

Here is a dilemma we should name openly, because you and we both have to navigate it. Many of the things that must be true for an exam to be legitimate are also the things that can be misunderstood as “loopholes”:

  • Accessibility requirements exist to ensure students with disabilities can participate on equal terms, often using assistive technologies and accommodations that should be available even in lockdown and secure exam modes.

  • Privacy and data protection place real constraints on how intrusive monitoring can be, how long data can be retained, and how transparently students must be informed.

  • Authentic assessment sometimes requires tools, resources, or workflows that cannot be fully “locked down” without undermining the learning outcomes the exam is meant to test.

These are not optional add-ons. They are often legal, ethical, and pedagogical requirements and they are essential to trust.

At the same time, we all know that where legitimate flexibility exists, misconduct attempts may try to exploit it. The answer cannot be “remove accessibility” or “ignore privacy” or “ban everything.” The answer has to be: design exams as a balanced system, where you combine controls so that no single necessary allowance becomes the only line of defence. That is the core logic of a layered approach.

WHY "PERFECT SECURITY" IS THE WRONG TARGET

It may sound counterintuitive, but aiming for 100% technical prevention as a universal goal can lead to worse outcomes:

  • You risk pushing institutions towards overly intrusive models that are hard to justify, hard to scale, and damaging to student trust.

  • You can accidentally reduce accessibility or block legitimate accommodations, creating inequity, appeals, and reputational risk.

  • You increase operational fragility: if one control fails (device, network, OS constraint), the whole integrity model collapses.

Instead, what works in practice is a model that looks more like defence-in-depth: each measure reduces risk in a different way, and together they create something that is practical, proportionate, and auditable.

A LAYERED MODEL: INTEGRITY THROUGH COMBINED CONTROLS

Think of exam integrity as three complementary layers. Not every exam needs every layer, but every high-stakes context needs more than one.

When these three layers work together, you get an integrity model that is resilient: if one control is limited by privacy constraints, OS limitations, or accessibility needs, the overall model still holds.

1. Preventive controls (reduce opportunity)

These are controls that shape what is possible during the exam: secure delivery modes, restriction of clearly prohibited pathways, and reducing the easy options for misconduct.

 
2. Detective controls (increase likelihood of discovery)

These are controls that produce signals and evidence: activity logs, invigilation insights, and reviewable records that support fair follow-up when something looks wrong.

 
3. Procedural and behavioural controls (reduce motivation and ambiguity)

This is the layer people forget: clear rules, predictable consequences, invigilator presence (where relevant), transparent communication, and consistent handling. These controls strongly influence deterrence – and they cost far less than trying to make technology do everything.
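The three-layer logic above can be made concrete in a short sketch. This is an illustration only (the control names, the `Layer` enum, and the check function are all hypothetical, not part of any WISEflow API or configuration): it records which layer each control belongs to and verifies the article's rule that a high-stakes exam should rely on more than one layer.

```python
from enum import Enum

class Layer(Enum):
    PREVENTIVE = "preventive"    # reduce opportunity
    DETECTIVE = "detective"      # increase likelihood of discovery
    PROCEDURAL = "procedural"    # reduce motivation and ambiguity

# Hypothetical control register for one exam configuration.
exam_controls = {
    "secure_delivery_mode": Layer.PREVENTIVE,
    "activity_logging": Layer.DETECTIVE,
    "invigilator_briefing": Layer.PROCEDURAL,
}

def layers_covered(controls: dict[str, Layer]) -> set[Layer]:
    """Return the distinct integrity layers an exam's controls span."""
    return set(controls.values())

def is_defensible_high_stakes(controls: dict[str, Layer]) -> bool:
    """A high-stakes exam should span more than one layer, so that no
    single necessary allowance becomes the only line of defence."""
    return len(layers_covered(controls)) > 1

print(is_defensible_high_stakes(exam_controls))  # True: three layers covered
```

The point of the sketch is resilience: if one control drops out (for an accommodation, say), the check still passes as long as another layer remains in place.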

 

THE REALITY OF CONSTRAINTS: TECHNOLOGY HAS BOUNDARIES BY DESIGN

One more point that often surprises stakeholders: some “gaps” are not vendor choices – they are platform and operating system boundaries created to protect users.

For example, modern operating systems include mechanisms that prevent certain applications from being captured in screen recordings to protect sensitive content. That is not a failure of exam tooling; it is a security principle of the platform itself.

This is another reason the one-dimensional mindset fails: if your integrity model depends on a single technical measure, you are vulnerable to the very realities that make modern computing secure in the first place. A layered model acknowledges those boundaries and designs around them.

SHARED OWNERSHIP: THE INSTITUTION AND THE PLATFORM EACH HAVE RESPONSIBILITIES

For our clients, current and future, we want to be very clear about the partnership model we believe in:

We can provide robust capabilities, governance options, auditability, and security-by-design. But we cannot, and should not, carry the integrity responsibility alone.

Why? Because institutions control (and must control) several critical elements:

  • The institutional risk profile: what is proportionate for each assessment type and level.

  • Policy and communication: what is permitted, what is prohibited, and what students can expect.

  • Accessibility and accommodations governance: ensuring equal access while maintaining defensibility.

  • Data protection accountability (as controller): DPIAs where needed, lawful basis, transparency, and retention decisions.

On our side, we focus on building and operating a platform that supports that governance: role-based access, logging, monitoring options, and operational resilience that can stand up to scrutiny.

This is not about pushing responsibility away. It is about recognising that integrity is a joint outcome. When institutions engage actively and we align the tools and configuration to your risk profile, you get something far more valuable than a “security feature”: you get a defensible assessment ecosystem.

A PRACTICAL WAY TO OPERATIONALISE THIS (WITHOUT BECOMING BUREAUCRATIC)

If you want a simple way to put the above into practice, here is a structure many institutions find workable:

  1. Define assessment tiers (institution-wide), e.g. low, medium, and high stakes.
  2. Decide “default + exceptions”: a default control set per tier, with documented exceptions.
  3. Run pre-exam readiness checks as part of integrity.
  4. Review and improve after each exam cycle.
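The “tiers plus default + exceptions” structure above can be expressed as data. As a minimal sketch (the tier names, control names, and function are hypothetical, not a product configuration), each tier maps to a default control set, and individual exams apply documented exceptions on top:

```python
# Hypothetical institution-wide tiers mapped to default control sets.
TIER_DEFAULTS = {
    "low": {"clear_rules_communicated"},
    "medium": {"clear_rules_communicated", "activity_logging"},
    "high": {"clear_rules_communicated", "activity_logging",
             "secure_delivery_mode", "invigilation"},
}

def controls_for_exam(tier: str, add: set[str] = frozenset(),
                      waive: set[str] = frozenset()) -> set[str]:
    """Start from the tier default, then apply documented exceptions:
    extra controls added, or defaults waived (e.g. for accommodations)."""
    return (TIER_DEFAULTS[tier] | set(add)) - set(waive)

# Example: a high-stakes exam where invigilation is waived for a
# documented accommodation and a compensating procedural control is added.
print(controls_for_exam("high", add={"oral_follow_up"}, waive={"invigilation"}))
```

The design choice is the one the article argues for: decisions start from an institutional baseline, and each exception is explicit and reviewable rather than re-litigated exam by exam.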

FREQUENTLY ASKED QUESTIONS

Thinking about switching to WISEflow? Find answers to the most frequently asked questions about functionality, implementation, and why institutions choose UNIwise.

What does “secure exams are not a setting” mean?
Secure exams are not achieved by enabling a single technical feature or lockdown mode. Exam security is the result of multiple controls working together across technology, policy, accessibility, privacy, and operations. Treating security as a single setting oversimplifies a complex risk environment and often creates new problems rather than solving them.
Why can’t exam technology simply prevent all misconduct?

Because no single system can eliminate misconduct without unacceptable trade‑offs.

Legal requirements, accessibility needs, privacy protections, pedagogical goals, and operating system boundaries all limit how intrusive or restrictive exam technology can be.

Effective exam integrity reduces risk through combined controls rather than attempting absolute technical prevention.

What is an institutional exam risk profile?
An exam risk profile defines the level of integrity assurance an institution requires across different assessment contexts and what trade‑offs it is willing to accept. It clarifies what risks are acceptable, tolerable, or require additional controls, and it provides a consistent baseline for decision‑making across exams.
Why should institutions define a risk profile before choosing exam tools?
Because tools should support institutional decisions, not replace them. Without a defined risk profile, institutions often re‑debate “perfect security” exam by exam. A documented risk profile allows institutions to apply controls proportionately, consistently, and defensibly across their assessment ecosystem.
What does a layered exam integrity model mean?
A layered model combines different types of controls so that no single measure is the only line of defence. If one control is limited by accessibility, privacy, or technical constraints, other layers continue to support integrity. This approach is more resilient, auditable, and realistic than relying on one mechanism.
What are the main layers in a secure exam model?

A practical exam integrity model typically includes:

  • Preventive controls, which reduce opportunities for misconduct
  • Detective controls, which increase the likelihood of identifying irregularities
  • Procedural and behavioural controls, which reduce motivation, ambiguity, and disputes

Together, these layers create integrity that holds up under real‑world conditions.

Why is “perfect security” the wrong goal for exams?
Aiming for 100% technical prevention often leads to overly intrusive monitoring, reduced accessibility, and operational fragility. When a single control fails, the entire model collapses. A proportional, layered approach delivers better outcomes by balancing fairness, trust, and defensibility.
Do accessibility and privacy requirements weaken exam security?
No. Accessibility and privacy are legal, ethical, and pedagogical requirements, not optional compromises. While they introduce necessary flexibility, they do not undermine integrity when exams are designed as balanced systems. Problems arise only when institutions rely on a single control instead of multiple reinforcing measures.
Who is responsible for exam integrity: the institution or the platform?
Exam integrity is a shared responsibility. Platforms can provide secure delivery options, logging, auditability, and governance features. Institutions remain responsible for defining risk profiles, setting policies, managing accommodations, communicating expectations, and fulfilling data protection obligations. Integrity emerges from alignment between both sides.
How can institutions operationalise exam integrity without excessive bureaucracy?

Many institutions succeed by:

  • Defining assessment tiers (e.g. low, medium, high stakes)
  • Establishing default controls with managed exceptions
  • Treating pre‑exam readiness as part of integrity
  • Reviewing and refining controls after each exam cycle

This approach supports consistency while remaining flexible and scalable.
