‘Building trustworthiness in data and AI’: Rasmus Blok on the D-Seal and European standards
Rasmus Blok, Co-founder and Executive Director at UNIwise, recently travelled to Brussels to deliver the keynote speech at an event on the launch of the D-Seal, ‘Denmark’s new labelling program for IT-security and responsible use of data,’ at the Danish Permanent Representation to the EU.
The D-Seal ‘will create digital trust for customers and consumers and drive digital accountability in companies.’
Here, Rasmus talks about why he supports the promotion of the D-Seal, how it can support the EU’s objectives, and why UNIwise wants to encourage a discussion on European standards in light of the AI Act:
The AI Act, voluntary labelling and the D-Seal
In Europe today, the situation around the use of AI, and around demonstrating trustworthiness to customers and end-users, is still developing. The context is that many universities, like other public institutions, are subject to various pieces of legislation, such as the General Data Protection Regulation (GDPR), ePrivacy rules and the European standard contractual clauses. It is difficult to get a full picture of this landscape, and many are confused about what they are allowed to do, what they cannot do and what is required of them.
All of this is a bit of a blur, and at the event on the launch of the D-Seal, I began by noting that a new European AI Act will soon come into force. This Act deals with what is deemed to be ‘high-risk’ AI: situations in which AI is used for decision-making, such as in self-driving cars. It does not apply to the low-risk functions we at UNIwise use AI for, such as facial recognition in online proctored examinations, where the AI guides us but does not make our decisions.
This low-risk area is not regulated in the way the high-risk one is; instead, it is proposed to be subject to voluntary labelling. This means that if we wanted to prove our trustworthiness to an institution and show that we follow the principles of data ethics, we would need to obtain a label of our own, voluntarily.
This context is made more confusing by the fact that voluntary labelling under the AI Act is essentially left for each EU member state to figure out for itself. This could very easily lead to a situation in which each member state requires its own label or certification, and we as providers would then have to be certified in each region separately. Furthermore, because the labelling is voluntary and there is so much confusion involved, clients would most likely ask us the same questions over and over again and would have no way of knowing which risk category we fall into.
To avoid this situation, we need a consistent labelling system for all member states. That’s why I was promoting the D-Seal: it is a Danish certification that speaks to and supports the trustworthiness of data and AI, and I want to promote it as a European standard.
The D-Seal certification hails from Denmark because we are one of the most digitised societies in the world, and we are very familiar with, and closely associated with, soft AI. Within two months of the Covid-19 pandemic hitting, we had a national app developed and in use; we are quick and agile in our digital responses.
The EU’s objectives, and how the D-Seal can support them
The event and subsequent discussions were quite quickly directed towards our agenda, and this is because the EU has, in this situation, two main objectives.
The first of these is to create trustworthiness for European citizens. This, of course, goes without saying. The second objective is to place as small a burden as possible on small and medium-sized enterprises such as UNIwise. The aim is to protect us and help us avoid unnecessary costs. The fact of the matter is that if we do not prepare adequately for this situation, it could result in greater distrust and an increase in costs across the board.
Currently, the D-Seal is really only a Danish certification, with twelve companies having attained the label. However, the intention is for the D-Seal to be used more widely as a European standard. This would allow smaller businesses to obtain a voluntary label that attests to competent IT-security and data ethics, identifying them as low-risk in terms of the AI Act and thereby supporting the EU’s objectives.
Why does UNIwise support this discussion?
From our perspective, this isn’t just about AI regulations or the D-Seal, but about IT-security in general. There’s a lot of noise being generated because of recent rulings and legislation, and scary scenarios are being thrown around. We want more consensus, so that we can understand what is actually happening, what we genuinely need to worry about, which standards we need to maintain, and so on. The D-Seal can be an icebreaker for this discussion.
Beyond this, of course, the certification functions as a value proposition: it underpins IT-security and ensures we act responsibly within the EU. Trustworthiness is of paramount importance. It’s not just about being accountable in terms of service delivery and having a strong sales team and good account management; it’s also about the things you don’t see, behind the scenes. We need to do the right things in terms of data ethics, and make sure that we aren’t just talking the talk but walking the walk too!
That’s why I spoke in support and promotion of the D-Seal in Brussels: we need a consistent and responsible labelling system across Europe which builds trustworthiness in data and AI, in order to alleviate the burden on smaller businesses and support the EU’s objectives.
You can read more about UNIwise’s approach to data privacy here.