"Here, effectiveness refers to the degree to which a given regulation achieves or progresses towards its objectives. It is worth noting that the concept of effectiveness is highly controversial within legal research,26 but for the purposes of this paper, the debate has no relevant implications."
"Legal definitions must not be under-inclusive. A
definition is under-inclusive if cases which should have been included are not included. This is a case of too little regulation."
"Some AI definitions are also under-inclusive. For example, systems which do not achieve their goals—like an autonomous vehicle that is unable to reliably identify pedestrians—would be excluded, even though they can pose significant risks. Similarly, the Turing test excludes systems that do not communicate in natural language, even though such systems may need regulation (e.g. autonomous vehicles)."
"Relevant risks can not be attributed to a single technical approach. For example, supervised learning is not inherently risky. And if a definition lists many technical approaches, it would likely be over-inclusive."
"Not all systems that are applied in a specific context pose the same risks. Many of the risks also depend on the technical approach." "Relevant risks can not be attributed to a certain capability alone. By its very nature, capabilities need to be combined with other elements (‘capability of something)."
Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools
AI Will Not Want to Self-Improve
Nay, J. J., Karamardian, D., Lawsky, S. B., Tao, W., Bhat, M., Jain, R., ... & Kasai, J. (2024). Large language models as tax attorneys: a case study in legal capabilities emergence. Philosophical Transactions of the Royal Society A, 382(2270), 20230159.
Avery, J. J., Abril, P. S., & del Riego, A. ChatGPT, Esq.: Recasting Unauthorized Practice of Law in the Era of Generative AI. Yale Journal of Law & Technology, 26(1).
Standardized nomenclature for litigational legal prompting in generative language models
Scientific Educations Among U.S. Judges
Social media jurors: conceptualizing and analyzing online public engagement in reference to legal cases
(Political candidates who admit to some criticisms may simultaneously attempt to link the opposition to criticisms perceived as worse, e.g. both leading candidates being considered aged, yet with different effects on each.)
Gordon, M. L., Lam, M. S., Park, J. S., Patel, K., Hancock, J., Hashimoto, T., & Bernstein, M. S. (2022, April). Jury learning: Integrating dissenting voices into machine learning models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-19).
(How is a juror instructed to eliminate implicit bias? What, specifically, would a course that changed their minds look like? Such bias is fairly easy to trigger in practice, e.g. as subtext invoking irony.)
Defining the scope of AI regulations
Tan, J., Westermann, H., & Benyekhlef, K. (2023). ChatGPT as an artificial lawyer? Artificial Intelligence for Access to Justice (AI4AJ 2023).
Better Call GPT, Comparing Large Language Models Against Lawyers
Could the destruction of a beloved robot be considered a hate crime? An exploration of the legal and social significance of robot love
Between Humans and Machines: Judicial Interpretation of the Automated Decision-Making Practices in the EU
Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML