Secure exams are not a "setting". They are a shared risk model. If you run digital assessments at scale, you've probably heard a familiar refrain: "Why can't the technology just stop misconduct?"
It’s a fair question because from the outside, “security” can look deceptively simple: lock down the environment, monitor activity, and the problem goes away.
But secure exams are not a single control. They are a system of trade-offs across law, privacy, accessibility, pedagogy, operations, and a threat landscape that changes every semester. And that is exactly why the most important first step is not choosing a tool, it is choosing (and documenting) a risk profile.
START WITH THE INSTITUTION'S RISK PROFILE, NOT THE EXAM TOOL
Before you decide what to enable for a specific exam, you need a baseline answer to a broader question:
What level of integrity assurance do we need across our assessment ecosystem and what trade-offs are we willing to make to achieve it?
A mature approach treats exam integrity the same way you treat other institutional risks: you define what is acceptable, what is tolerable, what requires additional controls, and what is simply out of scope for a given exam type or context.
That institutional baseline then becomes the foundation for exam-by-exam decisions, rather than re-litigating “perfect security” every time.
This matters because it’s easy to miss the forest for the trees: if the only success criterion is that each individual exam must be 100% secure, you end up optimising for absolutes that do not exist – and often introduce new problems in the process. A better goal is to build an exam environment that is fair, proportionate, auditable, and defensible, using a layered model that matches your risk profile.
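One way to make that baseline concrete is to write it down as data rather than prose. The sketch below is purely illustrative – the tier names, fields, and example values are our assumptions, not a WISEflow feature or API:

```typescript
// Hypothetical sketch: an institution-wide risk profile expressed as data.
// All names and values here are illustrative assumptions.

type AssuranceTier = "low" | "medium" | "high";

interface TierPolicy {
  tier: AssuranceTier;
  acceptableResidualRisk: string; // what the institution explicitly tolerates
  requiredControlLayers: number;  // minimum distinct control layers (see below)
  examples: string[];             // exam types this tier covers by default
}

const riskProfile: TierPolicy[] = [
  {
    tier: "low",
    acceptableResidualRisk: "open-book collaboration is expected",
    requiredControlLayers: 1,
    examples: ["formative quiz", "take-home essay"],
  },
  {
    tier: "medium",
    acceptableResidualRisk: "opportunistic misconduct deterred, not eliminated",
    requiredControlLayers: 2,
    examples: ["coursework exam", "midterm"],
  },
  {
    tier: "high",
    acceptableResidualRisk: "must be auditable and defensible on appeal",
    requiredControlLayers: 3,
    examples: ["final exam", "licensure assessment"],
  },
];
```

Once the profile exists as an artefact like this, exam-level decisions become lookups against it rather than fresh debates.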
THE UNCOMFORTABLE TRUTH: MANDATORY PROTECTIONS CAN CREATE EXPLOITABLE "GAPS"
Here is a dilemma we should name openly, because you and we both have to navigate it. Many of the things that must be true for an exam to be legitimate are also the things that can be misunderstood as "loopholes":
- Accessibility requirements exist to ensure students with disabilities can participate on equal terms, often using assistive technologies and accommodations that should be available even in lockdown and secure exam modes.
- Privacy and data protection place real constraints on how intrusive monitoring can be, how long data can be retained, and how transparently students must be informed.
- Authentic assessment sometimes requires tools, resources, or workflows that cannot be fully "locked down" without undermining the learning outcomes the exam is meant to test.
These are not optional add-ons. They are often legal, ethical, and pedagogical requirements and they are essential to trust.
At the same time, we all know that where legitimate flexibility exists, someone may try to exploit it. The answer cannot be "remove accessibility" or "ignore privacy" or "ban everything." The answer has to be: design exams as a balanced system, where you combine controls so that no single necessary allowance becomes the only line of defence. That is the core logic of a layered approach.
WHY "PERFECT SECURITY" IS TEH WRONG TARGET
It may sound counterintuitive, but aiming for 100% technical prevention as a universal goal can lead to worse outcomes:
- You risk pushing institutions towards overly intrusive models that are hard to justify, hard to scale, and damaging to student trust.
- You can accidentally reduce accessibility or block legitimate accommodations, creating inequity, appeals, and reputational risk.
- You increase operational fragility: if one control fails (device, network, OS constraint), the whole integrity model collapses.
Instead, what works in practice is a model that looks more like defence-in-depth: each measure reduces risk in a different way, and together they create something that is practical, proportionate, and auditable.
A LAYERED MODEL: INTEGRITY THROUGH COMBINED CONTROLS
Think of exam integrity as three complementary layers. Not every exam needs every layer, but every high-stakes context needs more than one.
1. Preventive controls
These are controls that shape what is possible during the exam: secure delivery modes, restriction of clearly prohibited pathways, and reducing the easy options for misconduct.
2. Detective controls
These are controls that produce signals and evidence: activity logs, invigilation insights, and reviewable records that support fair follow-up when something looks wrong.
3. Procedural and behavioural controls
This is the layer people forget: clear rules, predictable consequences, invigilator presence (where relevant), transparent communication, and consistent handling. These controls strongly influence deterrence – and they cost far less than trying to make technology do everything.
When these three layers work together, you get an integrity model that is resilient: if one control is limited by privacy constraints, OS limitations, or accessibility needs, the overall model still holds.
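As a rough illustration of why the combination matters, the sketch below encodes the rule that a high-stakes configuration must draw on more than one layer. The names and structure are hypothetical, not a platform API:

```typescript
// Illustrative sketch of the layered model as a validation rule.
// All control names are hypothetical.

type ControlLayer = "preventive" | "detective" | "procedural";

interface Control {
  name: string;
  layer: ControlLayer;
}

// Count how many distinct layers a configuration covers.
function distinctLayers(controls: Control[]): number {
  return new Set(controls.map((c) => c.layer)).size;
}

// The point of defence-in-depth: no single necessary allowance
// (e.g. an accessibility exemption) leaves the exam with zero coverage.
function isDefensible(controls: Control[], minLayers: number): boolean {
  return distinctLayers(controls) >= minLayers;
}

const finalExam: Control[] = [
  { name: "secure delivery mode", layer: "preventive" },
  { name: "activity logging", layer: "detective" },
  { name: "invigilator briefing and clear rules", layer: "procedural" },
];

console.log(isDefensible(finalExam, 2)); // true: three distinct layers
```

If one of these controls has to be relaxed for a legitimate reason, the check still passes – which is exactly the resilience the layered model is meant to buy.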
THE REALITY OF CONSTRAINTS: TECHNOLOGY HAS BOUNDARIES BY DESIGN
One more point that often surprises stakeholders: some “gaps” are not vendor choices – they are platform and operating system boundaries created to protect users.
For example, modern operating systems include mechanisms that prevent certain applications from being captured in screen recordings to protect sensitive content. That is not a failure of exam tooling; it is a security principle of the platform itself.
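As one concrete illustration: many secure delivery clients are built on frameworks such as Electron, where an application can ask the OS to exclude its own window from capture. The sketch below assumes an Electron-based client and a placeholder URL; it is not a statement about any specific product:

```typescript
// Sketch, assuming an Electron-based secure exam client.
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const examWindow = new BrowserWindow({ width: 1280, height: 800 });

  // Electron's BrowserWindow API: marks the window contents as protected,
  // so screen recordings and screenshots see a blank region instead
  // (supported on macOS and Windows).
  examWindow.setContentProtection(true);

  examWindow.loadURL("https://example.org/exam"); // placeholder URL
});
```

The protection is enforced by the platform, not the application – which is precisely why no exam tool can unilaterally "see through" another application's equivalent protection.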
This is another reason the one-dimensional mindset fails: if your integrity model depends on a single technical measure, you are vulnerable to the very realities that make modern computing secure in the first place. A layered model acknowledges those boundaries and designs around them.
SHARED OWNERSHIP: THE INSTITUTION AND THE PLATFORM EACH HAVE RESPONSIBILITIES
For our clients, current and future, we want to be very clear about the partnership model we believe in:
We can provide robust capabilities, governance options, auditability, and security-by-design. But we cannot, and should not, carry the integrity responsibility alone.
Why? Because institutions control (and must control) several critical elements:
- The institutional risk profile: what is proportionate for each assessment type and level.
- Policy and communication: what is permitted, what is prohibited, and what students can expect.
- Accessibility and accommodations governance: ensuring equal access while maintaining defensibility.
- Data protection accountability (as controller): DPIAs where needed, lawful basis, transparency, and retention decisions.
On our side, we focus on building and operating a platform that supports that governance: role-based access, logging, monitoring options, and operational resilience that can stand up to scrutiny.
This is not about pushing responsibility away. It is about recognising that integrity is a joint outcome. When institutions engage actively and we align the tools and configuration to your risk profile, you get something far more valuable than a “security feature”: you get a defensible assessment ecosystem.
A PRACTICAL WAY TO OPERATIONALISE THIS (WITHOUT BECOMING BUREAUCRATIC)
If you want a simple way to put the above into practice, here is a structure many institutions find workable:
- Define assessment tiers (institution-wide), e.g. low, medium, and high stakes.
- Decide "default + exceptions": standard controls per tier, with managed, documented exemptions (a brief sketch follows this list).
- Run pre-exam readiness as part of integrity.
- Review and improve after each exam cycle.
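Here is the "default + exceptions" step as a hedged sketch – every name below (controls, fields, the compensating-control idea) is illustrative, not a product feature:

```typescript
// Hypothetical sketch of "default + exceptions": the effective controls for
// an exam are the tier defaults, minus approved documented exemptions, plus
// any compensating controls. All names are illustrative.

interface Exemption {
  removedControl: string;       // e.g. a lockdown feature blocked by assistive tech
  justification: string;        // why the exemption is legitimate
  compensatingControl?: string; // what is added so a layer is not left empty
}

const tierDefaults: Record<string, string[]> = {
  high: ["secure browser", "activity logging", "invigilation", "clear rules"],
};

function effectiveControls(tier: string, exemptions: Exemption[]): string[] {
  const removed = new Set(exemptions.map((e) => e.removedControl));
  const compensating = exemptions
    .map((e) => e.compensatingControl)
    .filter((c): c is string => c !== undefined);
  return tierDefaults[tier]
    .filter((c) => !removed.has(c))
    .concat(compensating);
}

// Example: a screen-reader user cannot run the secure browser, so the
// exemption is documented and a compensating control is added instead.
console.log(
  effectiveControls("high", [
    {
      removedControl: "secure browser",
      justification: "assistive technology accommodation",
      compensatingControl: "one-to-one invigilation",
    },
  ])
);
// -> ["activity logging", "invigilation", "clear rules", "one-to-one invigilation"]
```

The design point is that an exemption never silently weakens the model: it is recorded, justified, and paired with a compensating control, which is what makes the resulting configuration defensible on appeal.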
FREQUENTLY ASKED QUESTIONS
Why can't exam technology simply prevent all misconduct?
Because no single system can eliminate misconduct without unacceptable trade-offs.
Legal requirements, accessibility needs, privacy protections, pedagogical goals, and operating system boundaries all limit how intrusive or restrictive exam technology can be.
Effective exam integrity reduces risk through combined controls rather than attempting absolute technical prevention.
What does a layered exam integrity model include?
A practical exam integrity model typically includes:
- Preventive controls, which reduce opportunities for misconduct
- Detective controls, which increase the likelihood of identifying irregularities
- Procedural and behavioural controls, which reduce motivation, ambiguity, and disputes
Together, these layers create integrity that holds up under real‑world conditions.
How can institutions put this into practice without becoming bureaucratic?
Many institutions succeed by:
- Defining assessment tiers (e.g. low, medium, high stakes)
- Establishing default controls with managed exceptions
- Treating pre‑exam readiness as part of integrity
- Reviewing and refining controls after each exam cycle
This approach supports consistency while remaining flexible and scalable.