"Here, effectiveness refers to the degree to which a given regulation achieves or progresses towards its objectives. It is worth noting that the concept of effectiveness is highly controversial within legal research,26 but for the purposes of this paper, the debate has no relevant implications."
"Legal definitions must not be under-inclusive. A definition is under-inclusive if cases which should have been included are not included. This is a case of too little regulation."
"Some AI definitions are also under-inclusive. For example, systems which do not achieve their goals—like an autonomous vehicle that is unable to reliably identify pedestrians—would be excluded, even though they can pose significant risks. Similarly, the Turing test excludes systems that do not communicate in natural language, even though such systems may need regulation (e.g. autonomous vehicles)."
"Relevant risks cannot be attributed to a single technical approach. For example, supervised learning is not inherently risky. And if a definition lists many technical approaches, it would likely be over-inclusive."
"Not all systems that are applied in a specific context pose the same risks. Many of the risks also depend on the technical approach."
"Relevant risks cannot be attributed to a certain capability alone. By their very nature, capabilities need to be combined with other elements (‘capability of something’)."
Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?
Cultural Bias in Explainable AI Research: A Systematic Analysis | Journal of Artificial Intelligence Research
Henry Shevlin, All too human? Identifying and mitigating ethical risks of Social AI - PhilPapers
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38.
Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk
Defining the scope of AI regulations
Towards an effective transnational regulation of AI
MAMBA and State Space Models explained | SSM explained
Computing Power and the Governance of Artificial Intelligence
Tan, J., Westermann, H., & Benyekhlef, K. (2023). ChatGPT as an artificial lawyer? Artificial Intelligence for Access to Justice (AI4AJ 2023).
Trends in AI — February 2024
Applying Machine Learning to Increase Efficiency and Accuracy of Meta-Analytic Review
Anniversary AI reflections
Black-Box Access is Insufficient for Rigorous AI Audits
Serious Games and AI: Challenges and Opportunities for Computational Social Science
The Ethics of AI in Games
Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?
Self-Rewarding Language Models
10 Predictions and 10 Papers for the New Year in AI // Trends in AI — January 2024
Thousands of AI Authors on the Future of AI
Turing's Test, a Beautiful Thought Experiment
Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? | Cambridge Quarterly of Healthcare Ethics | Cambridge Core
Development of Deep Ensembles to Screen for Autism and Symptom Severity
Enlarging the model of the human at the heart of human-centered AI: A social self-determination model of AI system impact
AI Policy Briefs - MIT Schwarzman College of Computing
Dell'Acqua, F., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (24-013).
PsyArXiv Preprints | A Psychological Model Predicts Fears about Artificial Intelligence across 20 Countries and 6 Domains of Application
AI Adoption in America: Who, What, and Where
Managing AI Risks in an Era of Rapid Progress