WHAT “GOOD” ASSESSMENT LOOKS LIKE IN 2026
Across Europe, sector work since the pandemic has converged on a few truths:
- Fairness and consistency are non-negotiable
Research on marking reliability shows why calibration, moderation and multi-marker workflows matter, especially where qualitative judgement is required. Clear criteria, robust rubrics, and well-designed processes reduce variation and bias, improving reliability and trust.
- Feedback must fuel learning, not just control
A large body of evidence (Nicol & Macfarlane-Dick; Boud & Molloy; Hattie & Timperley) shows that timely, criteria-linked feedback, dialogic processes, and student assessment literacy are among the strongest drivers of achievement and self-regulated learning.
- Assessment must be inclusive and future-ready
Students increasingly evidence learning through multiple media: text, diagrams and drawings, code, data notebooks, simulations, video, and audio. Platforms and processes need to handle this diversity without forcing artificial constraints. European frameworks like DigCompEdu and EUA’s DIGI-HE work highlight assessment capability, digital skills, and strategic maturity as success factors.
- Respect the assessor’s craft and time
Assessors need flexible workflows that fit their practice: item-by-item or candidate-by-candidate marking, rubric scaffolding, collaboration with co-markers, and the ability to work offline when that’s more efficient or necessary. These choices increase consistency and reduce cognitive load, especially in large cohorts.
- Quality assurance must be built-in, not bolted-on
From clear audit trails and moderation logs to alignment with the ESG and institutional QA processes, platforms need to make it easy to evidence fairness and consistency at scale.
TRANSLATING PRINCIPLES INTO PLATFORM REQUIREMENTS
If we start from these truths, a modern digital exams platform should:
- Enable fair, consistent marking with multi-marker models
Role-based permissions, rubric aggregation across assessors, configurable grade rules, and transparent resolution of differences.
- Support moderation and calibration
Sampling, side-by-side comparison, comment sharing among co-assessors, and recorded decisions build a defensible, repeatable process.
- Make feedback meaningful and layered
Inline annotations, reusable comment banks linked to rubric criteria, cohort feedback, and, where appropriate, structured grade justifications that explain how criteria were applied.
- Accommodate multimodal evidence
A unified marking experience should handle essays, MCQs, video/audio, code files, drawings, scanned paper, and interactive elements, and integrate discipline-specific tools (e.g., mathematics, engineering, computing) without breaking the assessor’s flow.
- Fit the way assessors work
Choice of marking orientation (candidate-by-candidate or item-by-item), offline marking options for low-connectivity contexts or personal workflows, and clear progress views that answer “what do I need to do, and where did I leave off?”
- Blend paper and digital where needed
Institutions should be able to run pen-and-paper sittings where that is pedagogically or operationally appropriate, while still centralising marking, moderation and feedback in the same digital environment.
- Build in integrity and privacy
Academic-integrity services available in-flow (not in a separate system), with robust data controls and GDPR-ready configurations.
- Provide analytics for quality enhancement
Cohort- and item-level insights to support second-order QA: identifying outliers, checking consistency, and informing programme-level improvement.
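To make “rubric aggregation with transparent resolution of differences” concrete, here is a minimal sketch of one possible approach (illustrative only, not any platform’s actual implementation; all names and the threshold are hypothetical): each assessor scores shared criteria, scores are averaged, and any criterion where assessors diverge beyond a configurable threshold is flagged for moderation rather than silently averaged away.

```python
from statistics import mean

def aggregate_rubric(scores_by_assessor, discrepancy_threshold=2):
    """Combine per-criterion rubric scores from several assessors.

    scores_by_assessor: {assessor_name: {criterion: score}}
    Returns (aggregated scores, criteria flagged for moderation).
    """
    criteria = next(iter(scores_by_assessor.values())).keys()
    aggregated, flagged = {}, []
    for criterion in criteria:
        marks = [s[criterion] for s in scores_by_assessor.values()]
        aggregated[criterion] = mean(marks)
        # A large spread between assessors signals a need for moderation,
        # so the disagreement is surfaced instead of hidden by the average.
        if max(marks) - min(marks) >= discrepancy_threshold:
            flagged.append(criterion)
    return aggregated, flagged

scores = {
    "Assessor A": {"argument": 8, "evidence": 7, "style": 6},
    "Assessor B": {"argument": 7, "evidence": 3, "style": 6},
}
final, needs_moderation = aggregate_rubric(scores)
print(final)             # {'argument': 7.5, 'evidence': 5, 'style': 6}
print(needs_moderation)  # ['evidence']
```

Real platforms layer role-based permissions and configurable grade rules on top of a core like this; the point is that the resolution step is explicit and auditable.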
WHY WISEflow ALIGNS WITH THESE NEEDS
WISEflow from UNIwise has been designed around exactly these principles, supporting institutions that need trustworthy, scalable assessment across formats and contexts:
- Multi-assessor workflows with rubric aggregation
Multiple assessors can each score against shared criteria, with configured aggregation and grade scaling to derive defensible final outcomes, ideal where different specialisms contribute to a single judgement.
- Real-time collaboration and moderation
Co-assessors can share comments and annotations directly inside the marking tool to discuss evidence and calibrate during the process, not just after the fact.
- Feedback designed for learning
Rubric-linked comments, layered feedback (in-text, to the student, and to the cohort), and grade-justification features help students understand performance against outcomes and act on it.
- One marking workspace for many media
The platform supports essays, MCQs, video, audio, and discipline-specific artefacts, with a best-of-breed integration approach (e.g., mathematics, engineering and coding tools) so subject teams can use the right instruments while keeping a unified assessor experience.
- Offline marking flexibility
Assessors can work offline (including with rubrics) when context demands, providing genuine flexibility that adapts to the assessor, not the other way around.
- Paper-to-digital bridging when you need it
The Paper Submission module allows institutions to retain pen-and-paper where appropriate while still getting the benefits of digital marking, moderation and feedback, crucial for inclusive provision across disciplines and markets.
- Integrity and privacy in-flow
Originality and integrity signals are available within the marking interface, reducing context-switching and keeping decision-making in one place, with institutional control over data.
- A roadmap for consistency at scale
We are developing AI-assisted grade justification and feedback with human-in-the-loop controls, comprehensive logging and quality analytics to support bias checks and consistency: augmenting, not replacing, academic judgement.
In short, WISEflow equips institutions and assessors to deliver fair, consistent, and learning-oriented assessment across media and modes, and to evidence it against European quality expectations (ESG), digital competence frameworks (DigCompEdu), and evolving practice in digitally enhanced learning and teaching (DELT).
A PRAGMATIC PATH FORWARD
- Start with principles
Use Jisc’s seven principles and recent trends work as a quick self-check: are your assessments understandable, equitable, feedback-rich, workload-aware, and employability-aligned?
- Operationalise fairness
Embed multi-marker workflows, set clear moderation policies, and use rubrics with shared exemplars. Back this with analytics to monitor variation over time.
- Design for learning
Treat feedback as a process, not a product: plan opportunities for dialogue, self-assessment and feed-forward.
- Respect context
Provide offline options and multimodal submissions so assessment fits the discipline and the assessor’s practice, not the other way round.
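“Analytics to monitor variation over time” can start very simply. The sketch below (hypothetical, not tied to any platform’s API) compares each marker’s average against the cohort average to surface systematic severity or leniency that a moderation policy should then investigate:

```python
from statistics import mean

def marker_variation(marks_by_marker):
    """Report how far each marker's average sits from the cohort average.

    marks_by_marker: {marker_name: [marks awarded]}
    Returns {marker_name: deviation from cohort mean, rounded to 2 dp}.
    Negative deviation suggests severity; positive suggests leniency.
    """
    cohort_mean = mean(m for marks in marks_by_marker.values() for m in marks)
    return {marker: round(mean(marks) - cohort_mean, 2)
            for marker, marks in marks_by_marker.items()}

deviation = marker_variation({
    "Marker 1": [62, 68, 70, 64],
    "Marker 2": [55, 58, 52, 57],   # consistently more severe
    "Marker 3": [66, 71, 69, 66],
})
print(deviation)  # {'Marker 1': 2.83, 'Marker 2': -7.67, 'Marker 3': 4.83}
```

A real quality dashboard would control for question difficulty and allocation effects before drawing conclusions; the value of even this crude check is that variation becomes visible and discussable rather than anecdotal.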
FURTHER READING AND REFERENCES
- ENQA (European Association for Quality Assurance in Higher Education). ESG: Standards and guidelines for quality assurance in the EHEA (2015, current; 2027 draft in consultation). https://www.enqa.eu/esg-standards-and-guidelines-for-quality-assurance-in-the-european-higher-education-area/ and EQAR knowledge base: https://www.eqar.eu/kb/esg/
- EUA DIGI-HE (European University Association). The future of digitally enhanced learning and teaching in European higher education institutions (final report, 2023). https://www.eua.eu/images/pdf/digi-he_final_report.pdf
- Jisc. Principles of good assessment and feedback (guide, updated with 2024/25 trends). https://www.jisc.ac.uk/guides/principles-of-good-assessment-and-feedback and Trends in assessment in higher education: considerations for policy and practice (2025). https://www.jisc.ac.uk/reports/trends-in-assessment-in-higher-education-considerations-for-policy-and-practice
- Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. Open versions: https://pureportal.strath.ac.uk/en/publications/formative-assessment-and-self-regulated-learning-a-model-and-seve and https://www.researchgate.net/publication/228621906_
- Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. Accepted version: https://opus.lib.uts.edu.au/bitstream/10453/26940/4/Rethinking%20Models%20Of%20Feedback%20For%20Learning.pdf
- Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. Publisher PDF: https://journals.sagepub.com/doi/pdf/10.3102/003465430298487
- European Commission, JRC. DigCompEdu: European framework for the digital competence of educators (2017). Overview: https://joint-research-centre.ec.europa.eu/digcompedu_en and publication record: https://publications.jrc.ec.europa.eu/repository/handle/JRC107466
- Ofqual / NFER. A review of literature on marking reliability research (2013) and Ofqual reliability research collection. https://assets.publishing.service.gov.uk/media/5a81cc0e40f0b62302699360/0613_JoTisi_et_al-nfer-a-review-of-literature-on-marking-reliability.pdf and https://www.gov.uk/government/publications/ofquals-reliability-research
- AQU Catalunya (TeSLA project). Framework for the quality assurance of e-assessment (2019). https://www.aqu.cat/doc/doc_79406656_1.pdf
- EUA. Curriculum and assessment: Thematic peer group report (2022). https://www.eua.eu/resources/publications/1009:learning-and-teaching-paper-no-16-curriculum-and-assessment.html
FREQUENTLY ASKED QUESTIONS
Why does assessment quality matter so much?
Because assessment is a social contract: students expect fairness, consistency, transparency, and meaningful feedback. European frameworks like the ESG emphasise that reliable, student-centred assessment processes are essential for quality assurance and academic trust.
What makes digital assessment fair?
Fairness requires anonymity options, double-blind marking, calibrated moderation, multiple assessor inputs, and clear rubrics. These elements reduce bias, improve marking reliability, and help institutions evidence consistency across programmes and cohorts.
Why is feedback so central to digital assessment?
Research shows that timely, criteria-linked, dialogic feedback helps students understand their progress, build self-regulation skills, and close learning gaps. Effective digital platforms must support layered feedback, reusable comments, and transparent grade justifications.
Why must platforms support multimodal evidence?
Today’s learners demonstrate knowledge through text, diagrams, code, simulations, video, and audio. Platforms must support diverse media, accessible design (WCAG 2.2), and Universal Design for Learning (UDL) principles so all students can succeed without artificial constraints.
What do assessors need from a marking workflow?
Assessors need flexible, intuitive workflows: candidate-by-candidate or item-by-item marking, rubric scaffolding, co-marker collaboration, offline marking options, and clear progress views. These reduce cognitive load and help maintain quality at scale.
How does WISEflow address these needs?
WISEflow embeds multi-assessor workflows, rubric aggregation, real-time moderation, multimodal marking, offline flexibility, in-flow integrity checks, and paper-to-digital support. Its roadmap includes AI-assisted feedback and grade-justification tools, always with human oversight and institutional control.