"Here, effectiveness refers to the degree to which a given regulation achieves or progresses towards its objectives. It is worth noting that the concept of effectiveness is highly controversial within legal research,26 but for the purposes of this paper, the debate has no relevant implications."
"Legal definitions must not be under-inclusive. A
definition is under-inclusive if cases which should have been included are not included. This is a case of too little regulation."
"Some AI definitions are also under-inclusive. For example, systems which do not achieve their goals—like an autonomous vehicle that is unable to reliably identify pedestrians—would be excluded, even though they can pose significant risks. Similarly, the Turing test excludes systems that do not communicate in natural language, even though such systems may need regulation (e.g. autonomous vehicles)."
"Relevant risks can not be attributed to a single technical approach. For example, supervised learning is not inherently risky. And if a definition lists many technical approaches, it would likely be over-inclusive."
"Not all systems that are applied in a specific context pose the same risks. Many of the risks also depend on the technical approach." "Relevant risks can not be attributed to a certain capability alone. By its very nature, capabilities need to be combined with other elements (‘capability of something)."
Defining the scope of AI regulations
Computing Power and the Governance of Artificial Intelligence
Tan, J., Westermann, H., & Benyekhlef, K. (2023). ChatGPT as an artificial lawyer? Artificial Intelligence for Access to Justice (AI4AJ 2023).
Applying Machine Learning to Increase Efficiency and Accuracy of Meta-Analytic Review
Anniversary AI reflections
Black-Box Access is Insufficient for Rigorous AI Audits
Serious Games and AI: Challenges and Opportunities for Computational Social Science
The Ethics of AI in Games
Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?
Self-Rewarding Language Models
Thousands of AI Authors on the Future of AI
Turing's Test, a Beautiful Thought Experiment
Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? Cambridge Quarterly of Healthcare Ethics.
Enlarging the model of the human at the heart of human-centered AI: A social self-determination model of AI system impact
Dell'Acqua, F., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (24-013).
A Psychological Model Predicts Fears about Artificial Intelligence across 20 Countries and 6 Domains of Application. PsyArXiv Preprints.
AI Adoption in America: Who, What, and Where
Managing AI Risks in an Era of Rapid Progress
Evaluating the Social Impact of Generative AI Systems in Systems and Society
Explainable Goal-driven Agents and Robots: A Comprehensive Review. ACM Computing Surveys.
Schmitt, L. (2021). Mapping global AI governance: a nascent regime in a fragmented landscape.
Feldstein, S. (2019). How Artificial Intelligence is Reshaping Repression
Assessing the impact of regulations and standards on innovation in the field of AI
Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31-88).
Model evaluation for extreme risks
Generative AI at Work
Characterizing Manipulation from AI Systems
Fair and Efficient Allocation of Scarce Resources Based on Predicted Outcomes: Implications for Homeless Service Delivery. Journal of Artificial Intelligence Research.