Cyber_Ethik

Do We Collaborate With What We Design?
The article examines the use of terms such as "collaboration" and "colleagues" to describe interactions between humans and artificial intelligence systems, arguing that these terms do not adequately capture human-machine relationships. The authors instead propose "joint action" to describe cooperation between humans and machines, stressing that while machines may have autonomy in choosing the means to reach a goal, the goals themselves are set by human actors. The article notes that terms such as "autonomy" and "collaboration" frequently cause misunderstandings, and that appropriate vocabulary matters when describing the relationship between humans and machines. Finally, it argues that describing machines in anthropomorphic terms can obscure their actual capabilities and roles.
·onlinelibrary.wiley.com·
Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
·arxiv.org·
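As a rough illustration of the idea (not the paper's own artifact), a model card can be sketched as a small structured record. The field names below are assumptions loosely following the sections the abstract mentions: intended use, evaluation procedure, and benchmarked results across demographic or intersectional groups.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationSlice:
    """Benchmarked result for one (possibly intersectional) group."""
    group: str    # e.g. "sex=female, Fitzpatrick skin type=V"
    metric: str   # e.g. "accuracy" or "false positive rate"
    value: float

@dataclass
class ModelCard:
    """Minimal sketch of a model card; field names are illustrative."""
    model_name: str
    intended_use: str          # contexts the model is meant for
    out_of_scope_use: str      # contexts it is not well suited to
    evaluation_procedure: str  # how the numbers below were produced
    evaluations: list[EvaluationSlice] = field(default_factory=list)

# Hypothetical card for the paper's smiling-face example
card = ModelCard(
    model_name="smiling-face-detector",
    intended_use="Flagging smiling faces in photos of consenting users.",
    out_of_scope_use="Any law-enforcement or surveillance setting.",
    evaluation_procedure="Per-group accuracy on a held-out test set.",
    evaluations=[
        EvaluationSlice("sex=female, skin type=V", "accuracy", 0.91),
        EvaluationSlice("sex=male, skin type=II", "accuracy", 0.95),
    ],
)
```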
Datasheets for Datasets
The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating characteristics, test results, recommended uses, and other information. By analogy, we propose that every dataset be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability.
·arxiv.org·
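A minimal sketch of the datasheet idea, again with illustrative wording: the paper structures a datasheet as a set of questions about the dataset's lifecycle, which can be rendered as a simple template for dataset creators to fill in. The section names follow the abstract's list, not the paper's exact question catalogue.

```python
# Illustrative datasheet template; sections follow the abstract's list
# (motivation, composition, collection process, recommended uses).
DATASHEET_SECTIONS = {
    "motivation": "For what purpose was the dataset created, and by whom?",
    "composition": "What do the instances represent? Are subpopulations identified?",
    "collection_process": "How was the data acquired, and over what timeframe?",
    "recommended_uses": "What tasks is the dataset suited or unsuited for?",
}

def blank_datasheet() -> dict[str, str]:
    """Return an empty datasheet keyed by section, ready to be filled in."""
    return {section: "" for section in DATASHEET_SECTIONS}
```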
NIST Special Publication (SP) 800-30 Rev. 1, Guide for Conducting Risk Assessments
The purpose of Special Publication 800-30 is to provide guidance for conducting risk assessments of federal information systems and organizations, amplifying the guidance in Special Publication 800-39. Risk assessments, carried out at all three tiers in the risk management hierarchy, are part of an overall risk management process—providing senior leaders/executives with the information needed to determine appropriate courses of action in response to identified risks.
·csrc.nist.gov·
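SP 800-30 determines risk qualitatively as a combination of the likelihood of a threat event and the impact of its occurrence. A minimal sketch of that style of lookup follows; the 3x3 matrix and its level names are a simplification for illustration, not the publication's own tables (which use five levels, very low through very high).

```python
# Illustrative likelihood-impact matrix in the style of SP 800-30's
# qualitative risk tables; values here are a simplified assumption.
RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "moderate"): "low",
    ("low", "high"): "moderate",
    ("moderate", "low"): "low",
    ("moderate", "moderate"): "moderate",
    ("moderate", "high"): "high",
    ("high", "low"): "moderate",
    ("high", "moderate"): "high",
    ("high", "high"): "high",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Look up the overall qualitative risk level for a threat event."""
    return RISK_MATRIX[(likelihood, impact)]

print(risk_level("moderate", "high"))  # -> "high"
```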
Ethics guidelines for trustworthy AI
On 8 April 2019, the High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence. This followed the publication of the guidelines' first draft in December 2018 on which more than 500 comments were received through an open consultation.
·digital-strategy.ec.europa.eu·
Warum Roboter Moral brauchen I Was kommt danach?
We are on our way into a digital society, and human beings themselves will change: alongside natural humans, hybrid or synthetic humans are conceivable, and the question of what a human being actually is keeps being posed anew. We will need an ethics for intelligent artificial systems; we need a morality for robots. Under the motto "Crossing Uncanny Valley", Harald Russegger explains at this Montagsrunde where these questions already arise concretely today and which problems and opportunities lie ahead. Harald Russegger is a university lecturer and an activist for a conscious approach to new technologies. More information: https://jungk-bibliothek.org/
·youtube.com·