Researching the future of AI in Norway: Erik Lang on the DIG AI group

The AI group at DIG: Erik Lang (left), Eirik Sjåholm Knudsen and Alexander Lundervold (top centre), Lasse Lien (bottom centre), Bram Timmermans (top right) and Ivan Belik (bottom right). Photo: Joakim S. Enger / Arent Kragh / NHH
By Maria Borghans Karlsen

3 March 2026 09:21


As AI accelerates, researchers at DIG are investigating how Norwegian organizations can adopt and govern the technology responsibly and effectively.

The DIG AI group is an informal group of researchers at DIG who share an interest in understanding and advancing artificial intelligence adoption and innovation in Norway. The group brings together researchers with complementary expertise: Bram Timmermans, Lasse Lien, Alexander Lundervold, Eirik Sjåholm Knudsen, Ivan Belik, and Erik Lang. Its purpose is to conduct rigorous, interdisciplinary research that helps Norwegian organizations navigate AI-driven transformation responsibly and effectively.

“We aim to generate insights that strengthen Norway's competitive position while contributing to sustainable economic growth,” says Lang.

Where the research is headed

The group is currently focusing on four main topics: mapping the AI ecosystem, AI adoption and governance, responsible AI integration, and competence development. The first topic entails developing a comprehensive understanding of who is building AI technologies in Norway and how this landscape is evolving. One example of this work is the AI Report Norway 2025.

Within the second topic, the researchers examine how AI is being integrated into decision-making processes and corporate governance structures at organizational and leadership levels. Building on this, the third topic investigates practical challenges, risks, and best practices when organizations implement AI in sensitive contexts, including data security, accountability, and appropriate use. Lastly, competence development means understanding what skills and knowledge organizations and their leaders need to use AI effectively and responsibly in an AI-driven business landscape.

Jonas Hammerschmidt during his presentation

“I am not scared of AI. I am scared of stupid people.”

This was one of the most telling quotes from our Assistant Professor Jonas Hammerschmidt as he summed up the findings from a “deep-dive” case study of responsible AI practices at four major Norwegian corporations.

AI at the governance level

One of the projects currently being developed by members of the group looks at AI in the Norwegian boardroom and is Norway's first large-scale research programme examining AI adoption in corporate governance. The project is conducted by Bram Timmermans, Lasse Lien and Lang in collaboration with Orgbrain, a Norwegian board management platform.

Lang explains that there is currently limited systematic knowledge about how AI is being used at the governance level, where strategic decisions, risk management, and accountability happen. To address this gap, the research explores how board members are using AI, where AI creates value, what risks emerge, and what competencies boards need.

The researchers hope that the project can serve actors such as board members, organizations developing AI policies, and business leaders. In doing so, the project highlights the role of the DIG AI group in generating empirically grounded knowledge on AI adoption and governance.

Beyond the Hype: Is "Shadow AI" Stalling Our Strategic Edge?

Earlier this month, Samfunnsøkonomisk Analyse, in collaboration with NHO and our DIG partner Abelia, launched the definitive status report on the use of AI in Norwegian industry.