- I am not scared of AI. I am scared of stupid people.
This was one of the most telling quotes cited by Assistant Professor Jonas Hammerschmidt when he summed up the findings from a “deep-dive” case study of responsible AI practices in four major Norwegian corporations.
The findings were presented recently at a round table discussion for DIG partners. The study shows that the adoption of AI in large Norwegian corporations has moved from experimental pilots to being part of the companies’ core infrastructure. The focus is thus shifting away from how to build AI and towards questions of governance: how to control it and use it responsibly.
The “illiteracy problem”
The “deep-dive” study covers the large Norwegian corporations Telenor, Gjensidige, KPMG and Equinor. Summarizing the findings, the main bottleneck for responsible AI appears to be a degree of AI illiteracy, which sparked the quote above from one participant in the study. There is also a fear among AI users and specialists in the organizations that senior management is basing decisions on an insufficient understanding of what AI can do and be used for.
- You can rely on AI if you understand it, was one of the statements found in the study.
You can’t rely on the IT-department
Another striking conclusion is that implementing AI and related tools is no longer a matter that can be left to the IT department alone. Responsible AI requires sustainability to be fully integrated into the process of adopting AI, with sustainability experts embedded more widely in the organisation, respondents in Jonas Hammerschmidt’s case study suggest.
The scientific paper created in hours
Also contributing to the discussion was Kim Kristoffer Dysthe of the Norwegian Institute of Public Health (NIPH), who recounted the time when researchers at the NIPH wanted to attend a scientific conference in Lübeck, Germany, but had forgotten the deadline for the call for papers. They tasked AI with drafting a scientific paper, which was completed in an hour. Because the lack of research into the basic scientific questions is a major concern for the NIPH, they decided in the end not to submit the AI-written paper and not to attend the conference.
They see that AI leads to a rise in the number of scientific papers released, but not a corresponding increase in quality. They conclude that they need to work on the human factor: ensuring that experienced scientists who adopt the new technology change their approach while still retaining quality.