AI is in Norwegian boardrooms – without safeguards

Research centre DIG, in association with Orgbrain, has released the survey "AI in the Norwegian boardroom"
By Arent Kragh

9 April 2026 12:08

A national survey compiled by research centre DIG in cooperation with Orgbrain and DIG partner KPMG shows that AI is already shaping how Norwegian board members prepare, deliberate, and engage with complex issues.

The survey also shows that most boards have no safeguards or rules in place governing what data may be fed into AI agents. What remains unresolved is not whether boards use AI, but whether they do so with adequate structure, shared norms, and institutional oversight.

Based on 777 responses from active board members, collectively representing more than 2,100 board seats, the study moves beyond principle and speculation to examine real practices, risks, and emerging governance challenges.

Researchers Bram Timmermans, Lasse B. Lien and Erik Lang from DIG conducted the survey.

Widespread use, uneven practice

Three out of four board members report using AI tools in some form in their board work. Usage ranges from regular, integrated practice to cautious experimentation. Some board members rely on AI daily across preparation, analysis, and information gathering. Others restrict its use to narrow administrative tasks, such as summarising board papers or drafting agendas. Today’s boardroom therefore contains fundamentally different AI practices operating side by side.

Where AI delivers value

AI’s strongest perceived value is where tasks are bounded and repetitive. Nearly four out of five users report time savings in board preparation. Drafting communications, summarising documents, and note‑taking dominate reported use.

More analytically demanding areas—risk management, strategic scenario planning, and financial analysis—are far less common and generate weaker perceived benefits. Even among experienced users, AI is not yet seen as transformative in the highest‑stakes aspects of governance. One boundary holds across all groups: boards do not want AI as a decision‑maker.

The governance gap

While AI usage is widespread, governance has not kept pace. In many organisations, AI has never appeared on the board agenda, and AI tools are rarely provided or approved centrally.

As a result, board members select their own tools, often public or individually paid services that have not passed procurement review or security assessment. This governance vacuum has concrete consequences: sensitive board material flows into AI tools without organisational visibility or control. Where guidelines do exist, however, compliance is high.

Researching the future of AI in Norway

As AI accelerates, researchers at DIG are investigating how Norwegian organizations can adopt and govern the technology responsibly and effectively.

Diverging AI user profiles

The study identifies four recurring AI user archetypes among board members. Thinkers use AI selectively for learning and insight, drawing clear boundaries around board-level use. Professionals integrate AI broadly across tasks, often combining training with paid tools. Efficiency Users focus narrowly on preparation and administrative efficiency and express strong unmet demand for training. Power Users employ AI intensively across tasks, share the most data, and voice the strongest concerns about security and transparency.

Sectoral differences

AI maturity also varies by board type. Civil society boards show low adoption and almost no formal governance. Public sector boards exhibit cautious use shaped by compliance constraints. Startup boards stand out for high‑intensity use driven by necessity. Large-company boards are the only group where formal AI guidelines appear in meaningful numbers.

Jonas Hammerschmidt during presentation

- I am not scared of AI. I am scared of stupid people.

This was one of the most telling quotes Assistant Professor Jonas Hammerschmidt cited when he summed up the findings from a deep-dive case study of responsible AI practices in four major Norwegian corporations.

What board members are asking for

What board members say they want most is not more advanced technology, but institutional legitimacy: secure, board-approved tools; clear usage guidelines; and training that addresses both opportunity and risk.

AI is already in the Norwegian boardroom. Among engaged board members, its presence is no longer in doubt. What remains uncertain is whether governance frameworks, shared norms, and oversight structures will develop quickly enough to make AI a managed element of board work rather than an invisible one.

The risk is not that boards will fail to adopt AI. The risk is that AI will continue to shape board practice without boards fully accounting for it.

Read more and download the full report here.