Rosen, D., Oh, Y., Chesebrough, C., Zhang, F. Z., & Kounios, J. (2024). Creative flow as optimized processing: Evidence from brain oscillations during jazz improvisations by expert and non-expert musicians. Neuropsychologia, 108824.
·pdf.sciencedirectassets.com·
Pan, C. A., Yakhmi, S., Iyer, T. P., Strasnick, E., Zhang, A. X., & Bernstein, M. S. (2022). Comparing the perceived legitimacy of content moderation processes: Contractors, algorithms, expert panels, and digital juries. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1-31.
Digital juries.
·dl.acm.org·
Gordon, M. L., Lam, M. S., Park, J. S., Patel, K., Hancock, J., Hashimoto, T., & Bernstein, M. S. (2022, April). Jury learning: Integrating dissenting voices into machine learning models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-19).
(How is a juror instructed to eliminate implicit bias? What would be the specifics of a course that changed their minds? Implicit bias is fairly easy to trigger in practice, e.g. as subtext that invokes irony.)
·dl.acm.org·
Ferreira, R., & Vardi, M. Y. (2021, March). Deep tech ethics: An approach to teaching social justice in computer science. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (pp. 1041-1047).
·cs.rice.edu·
Defining the scope of AI regulations

"Here, effectiveness refers to the degree to which a given regulation achieves or progresses towards its objectives. It is worth noting that the concept of effectiveness is highly controversial within legal research,26 but for the purposes of this paper, the debate has no relevant implications."
"Legal definitions must not be under-inclusive. A definition is under-inclusive if cases which should have been included are not included. This is a case of too little regulation." "Some AI definitions are also under-inclusive. For example, systems which do not achieve their goals—like an autonomous vehicle that is unable to reliably identify pedestrians—would be excluded, even though they can pose significant risks. Similarly, the Turing test excludes systems that do not communicate in natural language, even though such systems may need regulation (e.g. autonomous vehicles)." "Relevant risks can not be attributed to a single technical approach. For example, supervised learning is not inherently risky. And if a definition lists many technical approaches, it would likely be over-inclusive." "Not all systems that are applied in a specific context pose the same risks. Many of the risks also depend on the technical approach." "Relevant risks can not be attributed to a certain capability alone. By its very nature, capabilities need to be combined with other elements (‘capability of something)."

·arxiv.org·