Legal guidelines for the use of Artificial Intelligence (AI) at the University of Oslo

This is a short introduction to the aspects you will need to consider regarding data security, copyright and data privacy before using AI-based technology.

The GDPR (General Data Protection Regulation, or "personvernforordningen" in Norwegian law) is the legal framework for data privacy in Norway. This regulation was adopted by the EU and governs whether and how personal data may be stored and processed.

Data Security

When using AI-based tools, you need to be mindful of which data and information you allow the tool to access. The vast majority of these tools need more resources than you have available on your own computer. To analyze the data you give to an AI-based tool, it will likely use servers, which are other computers that your computer communicates with.

In addition, the information will be sent over the internet from your computer to the server, and new information will be transmitted back to your computer. If you want to use an AI-based tool as a student or employee at UiO, you will need to consider whether your planned use is safe and permitted by the applicable laws and regulations. This applies both to the server you plan to use and to the method of data transmission to that server.

If you are not sufficiently careful when assessing the safety and legality of an AI tool, you risk that the data you use with it falls into the wrong hands or is misused.

At UiO, we have two categories of AI tools:

Third-party tools

Third-party tools run on servers that do not belong to UiO, but are operated by companies that have entered into a data processing agreement with us. The data processing agreement means that we can exchange information with the server safely, but it places some limitations on which types of data we can use.

UiO’s ChatGPT service (GPT UiO) is one such example of a third-party tool. The service runs on servers in Ireland. Because the servers are within the EU/EEA, and because we have a data processing agreement with Microsoft, we can safely use this service for general personal data such as names and phone numbers – in other words, green and yellow category data.

This means that you will need to be careful not to use special categories of personal data, often referred to as sensitive personal data. Special categories of personal data include, for example, data about a person’s health, religion, or political views. Non-sensitive data, such as student names, e-mail addresses, and the students’ exam submissions, may be used in the GPT UiO tool.

Internal tools

Internal tools are tools that run on UiO’s own servers. When the tools run on our own servers, we have more control over the data being sent to them. This means that we can use them with more sensitive data than is the case with external tools.

UiO’s Autotekst service is an example of an internal tool. Autotekst can transcribe audio recordings to text with a high rate of accuracy, as well as translate speech to text in other languages. Because Autotekst runs on servers that UiO owns and controls, the service can be used for recordings that contain sensitive personal data. The service does not send any data to third parties, since all processing happens on our servers on campus.

However, you still need to be careful not to use unsecured networks when you send data to UiO’s servers. If you are using an open network (internet access that does not require a username and password), others may be able to access the data you are uploading through the same network.

More information about the different types of data, and how you can process and store each type, can be found in the UiO data classification framework.

More information about which data you can safely store can be found in the UiO data storage guide.

Copyright

In addition to taking care that no personal data ends up in the wrong hands, you need to be equally mindful that you do not violate copyright in the material you use with AI tools. If you have access to a research article that is normally behind a paywall, you are not allowed to make it accessible to others, since it is subject to copyright and not intended to be shared. If you give an AI tool access to such an article, this may allow others to access the article.

Additionally, the knowledge and results in the article may be used to further train the AI model, and thereby end up being included in the tool and accessible to anyone else who uses it. If you are not careful, you risk violating copyright laws and agreements in this way, which can have both legal and economic consequences. However, if you are using the GPT UiO service, this is not a problem, as the data will never be used to further train the model.

It is also important to remember that AI tools have been trained on vast quantities of data in order to perform their tasks well. In many cases, it is not known exactly what kind of data was used for training, and there may be questions about whether the use of that data to train the model was legal in the first place. This is, for instance, why the Italian data protection authority blocked ChatGPT, stating that the company behind ChatGPT did not have the legal right to use the data it used for training the model.

Published Aug. 25, 2023 2:24 PM - Last modified Jan. 4, 2024 1:54 PM