This June I attended a summer school on AI governance. Throughout the lectures, it was strikingly clear, and frustrating, that the lecturers, despite their different backgrounds (law, international relations, AI governance, etc.), pushed only the narrative Big Tech companies tell about superintelligence and its dangers, citing only research done by Big Tech.
It was equally surprising, frustrating, and disappointing that the lecturers had never heard of Timnit Gebru, Emily M. Bender, or Alex Hanna. In fact, they did not seem to be familiar with the research in critical AI studies at all. They looked at me as if I were crazy when I asked how it could be that they did not know these researchers and did not include their work in their understanding of AI and its capabilities. I could not understand it. How come? Why?
Today, as I was reading chapter 36 of the "Handbook of Critical Studies of Artificial Intelligence", titled "Barriers to regulating AI: critical observations from a fractured field", by Ashlin Lee, Will Orr, Walter G. Johnson, Jenna Imad Harb and Kathryn Henne, I finally understood why.
The authors argue that because nation states want to support the growth of AI, they defer regulatory responsibilities to external stakeholder groups, including think tanks and corporations (my summer school was organised by a think tank). This process is called hybridising governance. Under this type of governance, these groups are allowed to define the state's formal and informal regulations with little direction. The authors explain that "This [type of governance] creates a disorderly regulatory environment that cements power among those already invested in AI while making it difficult for those outside these privileged groups [researchers on critical AI and people harmed by AI] to contribute their knowledge and experience." They add that "External stakeholders stand to benefit from hybridising regulation of AI, with the public potentially less well served by this arrangement."
This explains why AI governance, in its current form, focuses so heavily on ethical AI guidelines, a mechanism of self-regulation, rather than on enforceable regulation. It also explains why the heads of the AI governance school kept pushing the same narrative that Big Tech companies repeat: that we need to regulate a scary, futuristic superintelligence rather than the AI systems causing harm right now.
I am uploading this section of the chapter here for you to read; it is very interesting.