AI FEM

141 bookmarks
An analysis of unconscious gender bias in academic texts by means of a decision algorithm
Inclusive language focuses on using vocabulary that avoids exclusion or discrimination, especially with regard to gender. Finding gender bias in written documents must normally be done manually, which is a time-consuming process. Consequently, studying the use of non-inclusive language in a document, and the impact of different document properties (such as author gender, date of presentation, etc.) on how many non-inclusive instances are found, is quite difficult or even impossible for large datasets. This research analyzes gender bias in academic texts using a study corpus of more than 12 billion words obtained from more than one hundred thousand doctoral theses from Spanish universities. For this purpose, an automated algorithm was developed to evaluate the characteristics of each document and look for interactions between age, year of publication, gender, and the field of knowledge in which the doctoral thesis is framed. The algorithm identified information patterns using a convolutional neural network (CNN) operating on vector representations of the sentences. The results showed that bias grew with the age of the authors, with men being more likely to use non-inclusive terms (up to an index of 23.12); there is a greater awareness of inclusiveness in women than in men in all age ranges (with an average of 14.99), and this awareness grows the younger the candidate is (falling to 13.07). In terms of field of knowledge, the humanities are the most biased (20.97), setting aside the subgroup of Linguistics, which has the least bias at all levels (9.90), and the fields of science and engineering, which also show little bias (13.46). These results support the assumption that the bias in academic texts (doctoral theses) is due to unconscious issues: otherwise, it would not depend on field, age, and gender, and would occur in any field in the same proportion. The innovation provided by this research lies mainly in the ability to detect, within a textual document in Spanish, whether the use of language can be considered non-inclusive, based on a CNN trained in the context of doctoral theses. A significant number of documents were used: all accessible doctoral theses from Spanish universities of the last 40 years. This dataset is only manageable with data mining systems; the training allows the terms to be identified in context effectively and compiled into a novel dictionary of non-inclusive terms.
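As a rough illustration of the kind of classifier the abstract describes, here is a minimal sketch of a 1D convolutional network over embedded sentences. The class name, hyperparameters, and PyTorch framing are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a 1D-CNN sentence classifier in the
# spirit of the abstract -- sentences become vectors via an embedding layer,
# are convolved with several filter widths, max-pooled over time, and scored
# as inclusive vs. non-inclusive. All names and sizes are illustrative.
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128,
                 num_filters=64, kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One convolution per filter width, sliding along the token axis.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                  # (batch, embed_dim, seq_len)
        # Max-pool each feature map over time, then concatenate.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

# Toy usage: a batch of two padded 20-token "sentences".
model = SentenceCNN()
fake_batch = torch.randint(1, 30000, (2, 20))
logits = model(fake_batch)                     # shape (2, 2)
```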
·journals.plos.org·
Addressing Gender Dimension in a Horizon 2020 Proposal - EMDESK
When developing your Horizon 2020 proposal, the question of the gender dimension is a pertinent one that mirrors the concern of many researchers and proposal writers. This article explains why you need to address the gender dimension in research and innovation and how to do so impactfully.
·emdesk.com·
What is the Gender Equality Plan in Horizon Europe?
The Gender Equality Plan (GEP) in Horizon Europe is a mandatory document for public bodies, higher education establishments, and research organisations.
·eufunds.me·
Ukim gep 2022 2025 mk
Gender Equality Plan of the Ss. Cyril and Methodius University in Skopje, 2022-2025
·ukim.edu.mk·
GEAR.pdf
·up.raindrop.io·
Gendered Innovations | Stanford University
Gendered Innovations means employing methods of sex and gender analysis as a resource to create new knowledge and stimulate novel design. The term was coined by Londa Schiebinger in 2005. This website features state-of-the-art Methods of Sex and Gender Analysis for basic and applied research. We illustrate how to apply these methods in case studies. Gendered innovations fueled by sophisticated gender methods stimulate the creation of new gender-responsible science and technology, and by doing so enhance the lives of both men and women around the world.
·genderedinnovations.stanford.edu·
Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning | IEEE Conference Publication | IEEE Xplore
Artificial intelligence is increasingly influencing the opinions and behaviour of people in everyday life. However, the over-representation of men in the design of these technologies could quietly undo decades of advances in gender equality. Over centuries, humans developed critical theory to inform decisions and avoid basing them solely on personal experience. However, machine intelligence learns primarily from observing the data it is presented with. While a machine's ability to process large volumes of data may address this in part, if that data is laden with stereotypical concepts of gender, the resulting application of the technology will perpetuate this bias. While some recent studies have sought to remove bias from learned algorithms, they largely ignore decades of research on how gender ideology is embedded in language. Awareness of this research, and incorporating it into approaches to machine learning from text, would help prevent the generation of biased algorithms. Leading thinkers in the emerging field addressing bias in artificial intelligence are also primarily female, suggesting that those who are potentially affected by bias are more likely to see, understand and attempt to resolve it. Gender balance in machine learning is therefore crucial to prevent algorithms from perpetuating gender ideologies that disadvantage women.
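The abstract's claim that models trained on text absorb the gender ideology embedded in that text can be made concrete with a common diagnostic: scoring words by their similarity to a "he minus she" direction in embedding space. The sketch below is not a method from this paper, and its four-dimensional vectors are toy values for illustration; a real analysis would use pretrained embeddings such as word2vec or GloVe.

```python
# Minimal sketch: quantifying gender lean in word embeddings by projecting
# words onto a "he - she" direction. Toy vectors only, not learned from data.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embedding table (illustrative values).
emb = {
    "he":       np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.2, 0.1, 0.0]),
    "engineer": np.array([ 0.6, 0.8, 0.3, 0.1]),
    "nurse":    np.array([-0.7, 0.7, 0.2, 0.1]),
}

gender_direction = emb["he"] - emb["she"]

for word in ("engineer", "nurse"):
    # Positive score: word leans toward "he"; negative: toward "she".
    score = cosine(emb[word], gender_direction)
    print(f"{word:>9}: gender lean = {score:+.3f}")
```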
·ieeexplore.ieee.org·
Frequently asked questions: Tech-facilitated gender-based violence | UN Women – Headquarters
As digital technology mediates more and more of our daily lives, it is also facilitating new and heightened forms of gender-based violence. Online violence against women and girls, though not a new phenomenon, has escalated rapidly since the onset of COVID-19—with serious implications for women’s safety and well-being. The impacts of such violence extend beyond the digital sphere, posing a significant threat to the exercise of women’s rights both online and off. Learn more about the issue—and what can be done to prevent and respond to it.
·unwomen.org·
Gender equality and artificial intelligence in Europe. Addressing direct and indirect impacts of algorithms on gender-based discrimination | ERA Forum
ERA Forum - This article assesses whether current European law sufficiently captures gender-based biases and algorithmic discrimination in the context of artificial intelligence (AI) and provides a...
·link.springer.com·