Steffen Skovfoged | Apr 30, 2026 | 6 min read

Happy Academics: The Metric That's About to Matter

Most universities can tell you how many exams they processed last semester. They can tell you about submission rates, turnaround times, platform uptime. What almost none of them can say anything about, even after 15 years of the sector going digital, is whether any of it is actually making their academics' lives better.

Not whether the system works. Whether it helps the academics.

Whether it reduces the time an academic spends on assessment. Whether it builds their confidence in the process. Whether it makes the hard parts of their job, the repetitive marking, the opaque workflows, the constant second-guessing about fairness and consistency, a little less hard.

That's the conversation we need to have, and it's the one we decided to make a central theme at our annual Partner Event this April. Not a product roadmap. Not a feature showcase. A conversation about the people who use WISEflow every day and whether we're genuinely serving them well.


Partner Seminar, 2026, opening by UNIwise CEO

A GAP THAT EXISTS EVERYWHERE

Here's a pattern we see repeated in every country we operate in. The people who choose and manage a digital assessment platform (admin teams, project leads, IT) are rarely the same people who use it to author exams, mark student work, and give feedback. The administrators and the academics often sit in different parts of the organisation, attend different meetings, and have different definitions of what "working well" means.

The result is a structural gap. When the platform creates value for academics, nobody captures that signal. When it creates friction, the feedback gets filtered through layers of interpretation before it reaches anyone who can act on it, and by the time it does, the moment has often passed. The friction may have been straightforward to fix, but the trust that eroded while it went unheard is much harder to rebuild. Nobody is failing. The structure just doesn't carry the signal fast enough.

Many universities do involve academics during procurement or tender — and rightly so. But once the platform is live and becomes business as usual, that recurring feedback loop often falters. The academics who were consulted during selection are rarely the same ones asked whether the system is working for them two years later.

This gap is one of the most underestimated problems in higher education technology. Not because it's dramatic (it's the opposite: it's invisible), but because every decision about digital tools gets made inside it. Roadmaps, procurement criteria, training budgets: all shaped with very little direct input from the people those decisions affect most.

WHAT WE LEARNED WHEN WE ASKED

At this year's event, I ran a session called "Happy Academics" with 18 senior partner representatives from universities across Europe. The format was simple: an honest self-assessment across seven dimensions of how close each institution is to the academics who use WISEflow. Not aspirational scores. Where things stand today.

The results confirmed what many of us suspected but hadn't quantified. Most institutions have some degree of contact with their academics, but it's patchy. Some have already advanced impressively in how they surface feedback from academics and channel it into decisions. The fact that these approaches exist, and work, tells you this is a solvable problem, not a structural impossibility.

But one finding stood out above everything else. When we asked whether institutions can tell if their assessment platform is reducing academic workload and building confidence, the answer was overwhelmingly: no. Across the group, this was the clearest consensus. The signal doesn't exist. We're all investing heavily in digital assessment, and we're measuring the wrong things — or not measuring at all.

That finding changed the energy in the room. Because it reframes the conversation. It's not about whether a platform has the right features. It's about whether anyone has built the feedback infrastructure to know if those features are landing.

Partner Seminar, 2026, Break out session

WHAT THE ROOM TOLD US TO DO ABOUT IT

After the self-assessment, participants were asked to propose concrete things we could try together. The response rate was high and the ideas were sharp. Four themes emerged:

  1. The first was a clear message about partnership: engage with our academics, but do it through us. Partners want to be the gateway. They'll host sessions, select participants, and manage expectations, but they want ownership of the relationship. This isn't resistance; it's institutional knowledge about what works.

  2. The second was methodological: show, don't ask. Come to academics with concrete prototypes and examples, not open-ended questions. Let them react to something tangible. Reaction is more productive than invention.

  3. The third was about reducing friction in the feedback process itself. Make it lightweight. Make it embedded, inside the platform, at the moment the academic is working. Don't add another system or another meeting.

  4. The fourth was a principle we've taken to heart: only ask for input on things you're genuinely prepared to act on. If academics give their time and see nothing change, you've damaged trust rather than built it. Better to do less, credibly, than to over-promise.

WHAT WE'RE EXPLORING TOGETHER

These aren't just findings we've filed away. They're shaping the conversations we're now having with partners about what to try next.

Together with our partners, we're considering a baseline study to measure what nobody currently measures: whether WISEflow actually reduces academic workload and builds confidence in assessment. Not satisfaction surveys. Actual impact. This is the missing foundation, and we want to build it properly together with the institutions that are closest to the academics.

We're exploring the idea of partner-hosted friction audits, structured sessions where an institution invites a small group of academics and we walk through specific workflows together. The partner controls the process. The output would be concrete: here's what's hard, here's what can change, here's when.

We're discussing lightweight, in-product feedback mechanisms so academics could share a reaction in the moment they're working, not through a chain of forwarded emails weeks later. Configurable by each institution, so partners decide who sees it and what happens with the data.

And we're looking at exemplar-based focus groups: instead of asking academics what they want, bring concrete prototypes of what's possible and let a small group react. Only on topics that are genuinely on the roadmap. Only where we're prepared to follow through.

THE INSTITUTIONS THAT WILL LEAD

I've been in this sector long enough to know that the universities that get digital assessment right over the next decade won't be the ones with the longest feature lists. They'll be the ones that build the tightest feedback loop between their technology and the people who use it.

That loop doesn't exist at most institutions today. But it can. Our session showed that some partners have already found approaches that work, and those approaches are replicable.

We're looking for institutions that want to build this together. Not as a pilot programme with a glossy name, but as a practical, honest collaboration between a platform company and the universities it serves. The kind where we measure what matters, act on what we hear, and hold each other accountable for whether academics are actually better off.

If that sounds like the kind of partnership your institution needs, I'd welcome the conversation.

About the author

Steffen Skovfoged is Founder and Chief Growth Officer at UNIwise, the company behind WISEflow. He has worked in digital assessment and EdTech for over 15 years.
