Rasmus Blok · Feb 24, 2026

AI, assessment & trust in higher education

Reflections on the Norwegian Expert Committee’s preliminary report and what it means for the sector.

In December 2025, the Norwegian expert committee on artificial intelligence in higher education published its preliminary assessments on the impact of AI with particular emphasis on assessment, grading and certification of learning outcomes. The report is not a final policy document, but it is nonetheless a strong signal to the sector: the foundations of assessment in higher education are being challenged, and incremental adjustments will not be sufficient. Read the full report here.

From a broader educational perspective, the report articulates a tension that institutions across Europe are already experiencing. Generative AI has not simply introduced a new tool. It has altered the conditions under which learning, assessment and trust operate. The key question is no longer whether students use AI, but how higher education can continue to certify competence in a way that is academically credible, pedagogically sound and legally robust.

FROM LEARNING SUPPORT TO CERTIFICATION CHALLENGE

The committee is clear in its diagnosis. AI systems, particularly large language models, can support learning through personalized feedback, adaptive explanations and scalable guidance. Used well, they may even strengthen certain forms of learning. However, when it comes to assessment, the same technologies fundamentally blur the boundary between student performance and system output.

This strikes at the core responsibility of higher education institutions: to stand behind the qualifications they award. If an institution cannot reasonably verify what a student has demonstrated themselves, the value of certification is weakened both academically and societally.

It is in this context that the committee issues its most debated recommendation: a clear warning against relying solely on non-controlled assessment forms such as take-home exams and unsupervised written submissions. The problem is not that these formats lack pedagogical value, but that they can no longer function as the sole basis for certification in an AI-rich environment.

A SHIFT IN ASSESSMENT DESIGN, NOT A RETURN TO THE PAST

Importantly, the report does not argue for a simple rollback to traditional invigilated exams. Instead, it calls for more deliberate combinations of assessment forms, where controlled elements play a clearer role in verifying individual competence, and where learning-oriented activities can still flourish alongside them.

Equally significant is what the committee advises against. The report explicitly discourages the use of AI detection tools in grading and misconduct cases, citing concerns around accuracy, bias and student legal protection. This is a crucial point: trust cannot be rebuilt by replacing human judgement with opaque detection technologies.

At the same time, the committee acknowledges that AI may have a role to play within assessment processes themselves, including grading support and the production of feedback, provided that issues of transparency, traceability and accountability are addressed. Sensibly, it stresses that grading carries higher stakes than feedback and therefore requires greater caution.

WHAT THIS MEANS FOR THE NORWEGIAN HIGHER EDUCATION SECTOR

Altogether, the report outlines a future in which Norwegian higher education must:

Re-establish clear links between assessment design and certification.

Invest in assessment literacy and AI competence among academic staff.

Develop shared sector approaches, rather than fragmented local solutions.

Accept that assessment systems and platforms are now strategic infrastructure, not neutral backdrops.

For institutions, this is not merely a pedagogical challenge. It is an organisational and technological one. Assessment at scale, across programmes, faculties and institutions, requires systems that can support complex assessment designs, controlled and non-controlled elements, rich documentation, and increasing demands for transparency and justification.

THE FUTURE PERSPECTIVE

From a UNIwise and WISEflow standpoint, we read the report as a strong confirmation of a direction we have already been moving in. For years, the Norwegian sector has been characterised by a high degree of digital maturity in assessment. Platforms like WISEflow have enabled large-scale digital exams, diverse assessment formats and more consistent handling of grading and feedback. The committee's report reinforces the idea that this infrastructure now becomes even more critical, not less. In particular, we see four implications:

First, assessment platforms must support flexible but controlled assessment design. The future is not one format, but well-considered combinations. Systems must make it easier, not harder, to design such assessments at scale.

Second, transparency and documentation will be key. As discussions around AI-assisted grading and feedback evolve, institutions need systems that can clearly document workflows, human oversight and decision points. Trust is built through traceability.

Third, feedback and justification are becoming more central. The report highlights the importance of feedback for learning, while recognising its different risk profile compared to grading. This aligns closely with ongoing developments around digital feedback, rubrics and justification, areas where structured system support matters.

Finally, sector-wide consistency matters. One of the risks in the current moment is that institutions respond in fragmented and reactive ways. As one of the main suppliers to Norwegian higher education, we see it as our responsibility to work closely with institutions and authorities to ensure that technological development supports shared principles, not isolated fixes.

 

ALIGNMENT IN PRACTICE: ASSESSMENT BY DESIGN, NOT BY DETECTION

From a UNIwise and WISEflow perspective, the committee’s recommendations resonate strongly with principles that have guided our platform design from the very beginning.

WISEflow was built as a platform for many different assessment designs, not for a single dominant exam format. In particular, it was designed to support combinations and sequences of assessment activities, controlled and non-controlled, formative and summative, rather than relying on one-off, standalone exams such as take-home submissions as the sole basis for certification. This flexibility is not incidental. It reflects the understanding that robust assessment in higher education is achieved through design, not through any single format.

In the same spirit, WISEflow has from the outset deliberately refrained from implementing AI-based detection in originality checks. The reason is straightforward and closely aligned with the committee’s conclusions: the attribution of text or output to a human or an AI system cannot be done with sufficient certainty, and the use of such tools raises serious concerns regarding transparency, bias and students’ legal protection. Safeguarding academic integrity must not come at the expense of students’ rights or due process.

At the same time, we see clear potential for AI to play a constructive and responsible role within assessment processes themselves, particularly in areas such as AI-assisted feedback and grade justification. When used as support for academic staff, and embedded in transparent, well-defined workflows, AI can help scale high-quality feedback, strengthen consistency, and support clearer articulation of grading decisions, without undermining human judgement or accountability. This distinction between assisting academic processes and automating certification is fundamental.

Taken together, these design choices place WISEflow firmly in line with the direction outlined by the expert committee: moving away from one-dimensional assessment models and questionable detection technologies, and towards well-considered assessment designs that combine pedagogical intent, institutional responsibility and carefully governed use of AI.

LOOKING AHEAD

The committee’s preliminary report is not a conclusion, but it is a turning point. It makes clear that AI is not a temporary disruption, and that assessment practices must be redesigned with this reality in mind. For Norwegian higher education, the challenge now is to move from alarm to action: from uncertainty to deliberate design. This will require collaboration across institutions, disciplines and system providers.

As one of the main suppliers to the Norwegian higher education sector, we see it as both a responsibility and an opportunity to contribute constructively to this transition. We look forward to continued collaboration with institutions and authorities in Norway to further develop and refine assessment solutions that are fit for an AI-rich future and, where appropriate solutions do not yet exist, to help design them together with the sector.

At UNIwise, we believe that the future of assessment lies in thoughtful integration of pedagogy, policy and technology. The conversation initiated by this report is both necessary and welcome, and we look forward to contributing actively to what comes next.

 

FREQUENTLY ASKED QUESTIONS

 

What is the Norwegian expert committee’s report and why does it matter?
The report is a set of preliminary assessments from the Norwegian expert committee on artificial intelligence in higher education, with a strong focus on assessment, grading, and certification of learning outcomes. While it is not a final policy document, it sends a clear signal to the sector: AI is challenging the foundations of assessment, and incremental adjustments will no longer be sufficient.
What is the core challenge posed by generative AI in assessment?
The core challenge is that generative AI blurs the boundary between a student’s own performance and system-generated output. When institutions can no longer reasonably verify what a student has demonstrated independently, the credibility of certification is weakened academically, legally, and societally.
Do the recommendations imply a return to traditional invigilated exams?
No. The report does not argue for a simple rollback to traditional exam formats. Instead, it calls for more deliberate combinations of assessment forms, where controlled elements play a clearer role in verifying individual competence, while learning-oriented and formative activities can still thrive alongside them.
Is there a responsible role for AI within assessment processes?
Yes, but with clear boundaries. AI can play a constructive role as support within assessment processes, for example in feedback or grading assistance, provided that transparency, traceability, and accountability are ensured. Crucially, AI should assist academic judgement, not automate or replace certification decisions.
How do UNIwise and WISEflow already live up to these recommendations, and how have they done so from the early days?

From a UNIwise and WISEflow perspective, the committee’s recommendations align closely with principles that have guided the platform’s design from the very beginning. 

First, WISEflow was built to support many different assessment designs, not a single dominant exam format. Specifically, it was designed to enable combinations and sequences of assessment activities, including controlled and non-controlled, formative and summative elements, rather than relying on one-off take-home submissions as the sole basis for certification. In other words, robustness comes from assessment by design, not by a single format.

Second, WISEflow has from the outset deliberately refrained from implementing AI-based detection in originality checks. The rationale is that attributing text or output reliably to a human versus an AI system cannot be done with sufficient certainty, and such tools raise serious concerns around transparency, bias, and students’ legal protection, meaning academic integrity must not come at the expense of due process.

Finally, the approach is not “no AI”; it is carefully governed AI. The report highlights a constructive role for AI within assessment processes (for example, AI-assisted feedback and grade justification) when used to support academic staff inside transparent, well-defined workflows, strengthening consistency and scaling high-quality feedback without undermining human judgement or accountability.
