
Responsible artificial intelligence

Our view on the responsible use of artificial intelligence (AI)

15 August 2023

Nicolai Tangen, CEO of Norges Bank Investment Management, Carine Smith Ihenacho, Chief Governance and Compliance Officer, and the Ownership Department.

  • We believe responsible development and use of artificial intelligence (AI) will be important for well-functioning markets and legitimate products and services – and that this may affect the fund's financial return over time.
  • We support the development of regulatory frameworks for AI that enable safe innovation and prevent negative consequences for people and society.
  • We believe responsible development and use of AI entails: i) board accountability, ii) transparency and explainability, and iii) robust risk management that, in addition to business risk, addresses privacy, security, non-discrimination, and human oversight and control.


Relevance to us as a long-term financial investor

As AI becomes ubiquitous across the economy, it is likely to bring great opportunities but also severe and uncharted risks – both for the companies we invest in and for the stakeholders affected by their activities. AI can be a powerful tool to augment business models and human processes, and can result in significant gains for companies. However, AI continues to develop at a pace where it can be challenging to predict and manage risks. Beyond regulatory, operational and reputational risks to companies, development and use of AI systems can impact society at large and human rights such as privacy, security, personal freedom and non-discrimination. It can increase the risk of large-scale misinformation, deception or manipulation.

As a long-term, diversified financial investor, we believe that we will benefit, through the companies we invest in, from the development of comprehensive and cohesive regulation of AI, which can contribute to safe innovation and market predictability. The development of global mechanisms of accountability can reduce risks and create a basis for long-term value creation, for example by providing certainty on liability related to the development and use of AI, and by contributing to mitigation of adverse impacts through clear safety standards. Many experts and companies have pointed to an international oversight body as a suitable solution. We encourage the companies we invest in, in particular those that own or develop AI systems, to engage constructively and transparently with standard setters and regulators. 

It is nevertheless the people and companies developing and using AI that will be the main drivers of its impacts on people, society and long-term economic growth. As an investor, we view responsible development and use of AI as a core element of responsible business conduct – and a necessary complement to the emerging regulatory landscape. AI governance systems should underpin company innovation and adoption of AI, and consider potential impacts across the AI value chain. Company approaches should build on internationally acknowledged standards such as the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises on Responsible Business Conduct. Although all AI actors should approach AI responsibly, we believe a particular responsibility rests on companies that develop or own AI systems. AI governance and due diligence processes must be proportionate to potential impacts.

Key elements of responsible AI

Board accountability

The board of directors is accountable for companies’ responsible development and use of AI. We believe boards play a key role by overseeing that corporate governance and strategy balance competitive deployment of new technology against potential risks – including risks to people and wider society. This will require board expertise and resources that are proportionate to the company’s risk exposure and business model.

Business-relevant AI policies and guidance are essential starting points for robust AI governance systems – and should be overseen by the board. Company AI policies and guidance that are aligned with internationally acknowledged standards for human rights and responsible business conduct, as well as relevant AI guidelines such as the OECD AI Principles and UNESCO’s Recommendation on the Ethics of AI, can ensure minimum safeguards are in place. AI governance structures that integrate sector-specific tools and best practices, are tailored to the company’s business model and specific use cases for AI, and are regularly updated, are necessary to manage evolving risks and seize opportunities. 

We also believe boards play an important role in overseeing a company culture of responsible AI stewardship to ensure implementation of AI policies and guidance across the business.

Transparency and explainability

Transparency and explainability are essential for building trust and accountability in AI systems, but they become harder to achieve as AI models grow more complex. Companies should be able to explain how the AI systems they develop or use have been designed, trained and tested – and how they align with human values and intent. Stakeholders should be enabled to assess the potential impacts of AI systems and understand their accuracy, efficiency and reliability. Companies should also provide information to relevant and trusted third parties, such as auditors or regulators, to allow them to verify the AI system and assess its risks.

We also see transparency as key to gaining informed consent from AI users and ensuring legitimacy among broader stakeholders. It should be clear when a person is interacting with or affected by AI systems, including synthetic content. Informing people of how AI is used in outcomes that affect them, and providing appropriate access to remedy, are also core components of responsible business conduct and key to mitigating adverse impacts. 

Robust risk management

Companies must be proactive in their management of AI-related risks and be transparent about their objectives for developing and deploying AI systems. Risk management processes should be robust and proportionate to the company’s risk exposure, and seek to identify, assess and mitigate risks to business, people and society. In addition to ensuring business resilience, AI risk management processes should address broader impacts and safeguard privacy, security and non-discrimination, and ensure effective human oversight and control.

Risk measures should include evaluations of system limitations and the potential consequences of system failure. Development or deployment of AI systems that can pose particularly severe risks to people, society or business outcomes should be subject to additional controls. Risk management processes should also have appropriate safeguards to manage the risks of misinformation, deception or other adverse impacts. Importantly, we believe AI systems, guidelines and risk management processes should be independently verified and regularly audited over time.

Companies that develop and use AI have a responsibility to prevent and mitigate risks across the AI value chain in situations where they cause or contribute to adverse impacts, including impacts on people in vulnerable situations. Diverse teams, datasets and stakeholder engagement, and appropriate oversight structures, are important for effectively preventing and mitigating unintended impacts or the perpetuation of harmful biases. We believe companies should take particular care and give due consideration to impacts on people in vulnerable and marginalised situations, such as children. 

Developments to watch

Our view on and approach to responsible AI is likely to evolve over time to reflect risks and opportunities for our portfolio. We will pay close attention to the following:

  • Development of international standards and regulation for companies.
  • Companies’ implementation of responsible AI policies, guidelines, risk management and related reporting.
  • Risks related to privacy and security, and misinformation, deception and manipulation.
  • Effects on inequality and discrimination.
  • Long-term effects on the workplace and companies’ human capital management.

As a financial investor, we will work at the market level to support the ongoing development of international standards and regulation that impact the companies we invest in, integrate risks and opportunities into our portfolio management, and engage with companies on responsible development and use of AI.