Artificial intelligence in The Brønnøysund Register Centre
In the strategic plan of The Brønnøysund Register Centre, under "Ambisjoner" (Ambitions), it is stated that we are to explore and responsibly use the possibilities artificial intelligence provides. As a result, we have adopted a policy on our use of artificial intelligence, drawn up by our AI team. The policy provides the following guidelines for all our employees:
You should never:
- share personal information, such as information about employees and citizens, with external services
- share sensitive information, such as mission-critical information
- share user information, such as login information and passwords
- cut and paste or otherwise use internal or sensitive information, documents or excerpts of these in an external AI service, unless otherwise agreed and documented
- trust the AI to tell the truth – always check the facts
- use AI alone to make decisions which in any substantial way affect individuals
You should always:
- be critical of everything you read – watch out for inaccuracies, biases and false information
- consider critically whether AI output violates copyright, especially that of a third party
- specifically mark content which is fully or partly created through generative AI, for instance "This illustration was created with the help of AI", in addition to crediting the specific AI service
- note that you are responsible for an AI-generated text whenever you publish it or send it to anyone
- be transparent and state when you are using AI-generated text, images or other content
- note that AI makes it easier for impostors to scam people. Always contact our service desk if you are in doubt about whether something is real or fake
If you wish to contact our AI team, please use our contact form. You can read our policy here:
The Brønnøysund Register Centre has, as one of its four strategic ambitions, to "explore and in a responsible way exploit the possibilities provided by artificial intelligence". This should help us meet our social responsibility, which is to "contribute to increased added value through managing register data in a safe and clear manner, in order to create confidence in and renewal of society". Artificial intelligence (AI) is an area of information technology which is developing rapidly. Public authorities' development and use of AI can yield large gains in the form of improved efficiency and saved resources, and thereby large financial and welfare benefits for society. At the same time, there is a certain risk connected to the use of this type of technology. AI systems use large quantities of data, which may include personal and sensitive data subject to restrictions in the legislation.
As a government agency, we depend on our users trusting the services we provide. For this reason, it is important to ensure that our use of AI is legal, fair and responsible. We have to follow up on this use actively and use it as a basis when we consider how to use AI responsibly and transparently.
In the national strategy for artificial intelligence, the government states that the development and use of AI in Norway must be built on ethical principles and respect human rights and democracy. Based on this, we have established five basic ethical principles for the development and use of AI. The principles are to be normative for all situations where we use AI, and their purpose is to provide the main guidelines for what will eventually become a general framework for AI. The five basic ethical principles are:
- accountability
- transparency
- fairness
- security
- privacy protection
The purpose of this policy is to ensure that everyone in The Brønnøysund Register Centre who uses or develops AI systems does so in a legal and responsible manner. To serve this purpose, the policy will be updated regularly as the technology develops, so it is important that you keep yourself continuously informed of any changes.
This policy applies to everyone in The Brønnøysund Register Centre who uses or develops AI systems. This includes both employees and contract workers. In addition, there are separate guidelines for those who develop AI systems, or who use AI systems in other development projects. These guidelines entail a greater ethical and legal responsibility, as is evident from the principles.
As of today, no Norwegian legislation regulates the use or development of AI. The EU has passed the AI Act, which will gradually become part of Norwegian law. The AI Act is product liability legislation regulating the development of AI systems and the integration of such systems in organisations.
However, as a government agency we are bound by other legislation which is also relevant to the use of AI. Among other things, we are bound by the Public Administration Act for case processing, the Freedom of Information Act for transparency, the Archives Act for archiving public information, and the Personal Data Act for the protection of personal information. It is important to note that the use of AI requires compliance with Norwegian regulations, and that the use of AI can entail greater risks in some legal areas, such as personal data protection.
An algorithm is a set of step-by-step instructions in a specific order, created to accomplish a task. Machine learning uses mathematical and statistical methods to create algorithms based on analysis of data. The training of a system can take place through supervised learning, unsupervised learning or reinforcement learning. Artificial intelligence has various definitions, some broad and some narrow. Common to the definitions is that they describe technology capable of systemising and interpreting data, making decisions and learning from experience.
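As a purely illustrative sketch (the data and function names here are invented for the example and are not part of the policy), the difference between a hand-written algorithm and a model produced through supervised machine learning can be shown in a few lines of Python:

```python
# 1) A classical algorithm: every step is spelled out in advance by a human.
def mean(values):
    total = 0.0
    for v in values:          # walk through the data in a fixed order
        total += v
    return total / len(values)

# 2) Supervised machine learning: the "rule" (here a slope and an intercept)
#    is derived from example data instead of being written by hand.
def fit_line(xs, ys):
    mx, my = mean(xs), mean(ys)
    # Closed-form least-squares estimate for y = slope * x + intercept.
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return slope, intercept

# Invented training data following y = 2x; the model "learns" that relationship.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)   # learned parameters, recovered from the data
```

The first function encodes a rule a human wrote down; the second derives its rule from the examples it is given, which is why the quality and representativeness of the training data matter so much.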
There is a difference between an AI model and an AI system. An AI model is a mathematical or algorithmic structure trained on data to solve specific tasks. An AI system, on the other hand, is a complete package containing one or several AI models, along with the software needed to deploy the models and put them to practical use. This includes everything necessary to implement the model, for instance an app.
An AI model is a part of an AI system. Traditional AI is based on rules and logic, and is dependent on rules and programming defined by humans. It often focuses on tasks such as classification, decision-making and problem-solving.
Generative AI refers to models which can generate new data or content based on existing information. It is capable of generating text, images, music or other types of content. ChatGPT is an example of a generative AI system which can generate text, but there are also examples of systems which can generate images and sound, such as DALL-E and the web service Suno.
An AI user is a person who uses (pre-trained) AI services to solve a task. Whoever uses an AI service must have received training, but does not need to understand how the specific model reaches its conclusions.
An AI developer is a person who develops and trains new AI models to perform a specific task. The developer often has to consider factors such as data quality, data volume and prediction accuracy. The AI developer is responsible for ensuring that the model is developed in accordance with the ethical principles and applicable legal requirements.
ChatGPT is one of many large language models available to users. These are generative AI models used to communicate and answer questions in a way that imitates human conversation. They are trained through feedback and adapt their replies based on instructions. External AI solutions, of which ChatGPT is an example, are trained on all publicly accessible data online. Using ChatGPT involves a risk that the information shared might also be shared externally with others outside our organisation. Internal AI solutions, such as CoPilot, are limited to accessing only an organisation's internal data. This means that sensitive data can be shared without the risk of it being shared with unauthorised persons. The Brønnøysund Register Centre does not have this type of solution as of today.
6.1. Accountability
AI uses data and information both to provide results or develop products, and to train algorithms. We are to use AI only in accordance with this policy and current legislation, including the Personal Data Act and the General Data Protection Regulation, the Copyright Act and other legislation on intellectual property rights, the Working Environment Act, etc. In addition, it is important to critically examine the results generated by AI, and to fact-check results that seem dubious.
We are committed to handling information from the registers, our users and our employees in a secure way, which means that we must keep information and data confidential. Breaches of these routines, laws, regulations or agreements, or actions that inflict damage, can make us liable as an organisation.
As a government agency, we have a great responsibility to maintain good security routines and trust in our services. As an employee of or contractor for The Brønnøysund Register Centre, you have to go through this policy before you start using AI, in order to understand what is required of you as an AI user. It is important that you keep track of the relevant laws and regulations that apply to the specific AI system you are going to use. If you are in doubt about what applies, or about the consequences of your actions, you must first acquire knowledge about how the AI system works. If The Brønnøysund Register Centre implements its own internal AI services, training must be provided for users of AI.
6.2. Transparency
We are obliged to provide users and employees with security and trust through transparency about how we use AI. AI can be unfamiliar technology to many people, and this gives us a responsibility to inform and explain what the consequences might be for each individual. Transparency includes openness towards both employees and users of our register services about our use of AI systems. In employees' use of AI in their work, it is important to distinguish between using AI as decision support and using AI to replace human labour. When AI is used for decision support, it has to be clear to the employee how the AI system is supposed to be used.
If we include AI directly in our communication with our users, for instance in guidance or in making decisions, transparency requirements in administrative law and personal data legislation will apply. It is particularly important to make sure that users understand that they are communicating with, or that a decision has been generated by, technology based on AI. This is necessary for the recipient to be able to safeguard his or her own rights and interests, or to be able to challenge a decision. We have to provide the user with information about the AI system in a meaningful and understandable way.
6.3. Fairness
For us to be able to use AI in a fair way, in keeping with our values, it is important that we are aware of the various risks related to AI systems.
For instance, it is important to be aware that generative models, such as ChatGPT, can produce biased or discriminating statements, in addition to statements that are incorrect. It is important to note that AI systems do not have human qualities such as morals, common sense, logic and intuition. This affects the quality of the replies produced by generative language models. To minimise the risks, it is important to have humans check and assess the replies provided by AI. This is especially important when AI is used as part of problem solving where it can have direct consequences for the legal status of individuals and legal persons. Human follow-up and control can help ensure that an AI system does not undermine human autonomy or cause other unforeseen negative effects.
6.4. Security
The introduction of AI can amplify existing security risks or create new ones. When using AI, we are to follow security instructions, security management systems and underlying operational documentation related to the use of AI. This includes, among other things, security risk management, directions for information processing and classification, and the handling of deviations within security and privacy protection. Sensitive or higher-classified information must not be submitted to an AI service.
Please be aware that threat actors also use AI. AI has made it easier for them to adopt new tools and methods, and to direct better and more versatile attacks at systems and individuals. Attackers can use AI to generate convincing and misleading content used to imitate others, thereby increasing the probability of a successful attack.
Sufficient routines for security and training are important when it comes to the use of AI. Employees and contract workers who develop or use AI-based services must be sufficiently trained in safe use of the system. This must include training in security, privacy protection, data administration and the reporting of suspicious and unwanted events (deviations).
6.5. Privacy protection
Even though transparency is a principle in decision-making, all individuals have a right to privacy. We must ensure that privacy protection is attended to when working with AI. In addition to the requirements in the General Data Protection Regulation, there are also limitations in the Security Act, the Public Administration Act and various special laws as to which information we are allowed to share.
We are not allowed to share personal data or other confidential information without reference to a specific legal basis. Personal data includes all types of information that can be connected, directly or indirectly, to a natural person. This can include information such as name, address, telephone number, e-mail address and national identity number. It also includes information from various sources which is not enough on its own to identify a person, but which is sufficient when combined. In addition, we are not to share proper names, such as company names, sensitive information from our registers, or personal names with AI solutions that are not part of our internal systems.
Sensitive information about individuals from our registers can be, for example, information about bankruptcies and disbursements. There are also specific requirements in the various special laws on information we cannot distribute, such as national identity numbers. This is because such information might appear in the output of the system and become accessible to others. We apply a precautionary principle: if you are in doubt as to whether information is confidential or not, do not share it in an AI service.
7.1. What can we share and transfer to artificial intelligence?
AI systems learn from the data they are provided with. In generative language models using external search engines, such as ChatGPT, all questions and instructions will be used as training data to improve the service. This means that information entered into such language models can become available to others who use the service, including people outside our organisation. For this reason, we are not to share or transfer information which is, or might be, classified or confidential, to an external AI system. By «sharing» or «transferring» we mean all ways of entering information into AI services, such as copying text into the service, uploading documents, making connections between AI systems, etc.
Confidential information can be, for instance, accounting figures, budgets, notes or minutes from meetings, presentations, agreements, etc., or information about others, such as clients and partners. The crucial point is whether the information we share is something we would normally not have shared with a stranger. Information can be confidential either because it is something we do not want others to know about our organisation, or because we are obliged by law to keep it confidential.
Legislation on non-disclosure is meant to ensure confidentiality, that is, confidential information must not be available to unauthorised persons. In general, we should exercise a high degree of caution when using classified information as input in AI systems. Sharing information with such AI systems can also be irreversible. This can affect citizens' rights under personal data protection law, such as the right to erasure of personal data. The right of access to information that has been retrieved and used can also be violated, unless we have control over how personal data is used in the AI system.
7.2. The use of results from artificial intelligence
AI based on language models is trained on large quantities of data that are not necessarily quality-assured or verified. This means that results from AI systems can be inaccurate and sometimes incorrect. Because of this, it is important to be critical of the results and to verify them to the fullest extent possible. This is particularly important if AI is used as a means within decision-making that might have consequences for an individual. If the data used to train an AI system reflects unwanted social prejudice, this might produce biased results. It is therefore important that we are aware of these risks, and that AI does not replace human assessments and decisions required by law. Since AI algorithms are trained on external material, the generated results may breach the copyrights of others. This can include text, images, audio/music, videos, etc. Results from the use of AI should therefore not be published unless you are confident that they do not violate others' rights. It should be marked that the results are generated by AI.