E-Resources: Акција Здруженска / Институт за родови студии (Institute for Gender Studies)
How research managers are using AI to get ahead
For those at the interface of funding organizations and the scientific community, platforms such as ChatGPT can tackle menial tasks and free up time for relationship-building work such as coaching and mentoring.
We introduce GAIA, a benchmark for General AI Assistants that, if solved, would represent a milestone in AI research. GAIA proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and general tool-use proficiency. GAIA questions are conceptually simple for humans yet challenging for most advanced AIs: we show that human respondents obtain 92% versus 15% for GPT-4 equipped with plugins. This notable performance disparity contrasts with the recent trend of LLMs outperforming humans on tasks requiring professional skills in, e.g., law or chemistry. GAIA's philosophy departs from the current trend in AI benchmarks of targeting tasks that are ever more difficult for humans. We posit that the advent of Artificial General Intelligence (AGI) hinges on a system's ability to exhibit robustness similar to that of the average human on such questions. Using GAIA's methodology, we devise 466 questions and their answers. We release our questions while retaining the answers to 300 of them to power a leaderboard available at https://huggingface.co/gaia-benchmark.
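The 92% vs. 15% figures come from grading each answer against a withheld gold answer. Below is a minimal sketch of exact-match scoring of the kind such benchmarks use; the record format and the normalisation rule are illustrative assumptions, not GAIA's official scorer.

```python
# Hypothetical exact-match scorer for a GAIA-style question/answer benchmark.
# The dict-based record format and the whitespace/case normalisation are
# assumptions for illustration.
def normalise(answer: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't count."""
    return " ".join(answer.strip().lower().split())

def exact_match_score(predictions: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of questions whose normalised prediction equals the gold answer."""
    hits = sum(
        normalise(predictions.get(qid, "")) == normalise(answer)
        for qid, answer in gold.items()
    )
    return hits / len(gold)

gold = {"q1": "Paris", "q2": "42"}
predictions = {"q1": "paris", "q2": "41"}
print(exact_match_score(predictions, gold))  # 0.5
```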
AI Chatbot that Assesses and Grades Your Students at Scale!
Create an AI chatbot based on your instructions, resources, and rubrics, while protecting student data.
Webinar: Introduction to Gender Equality Plans
This webinar is part of the GE Academy capacity-building program. It explores the concept of institutional change for...
11 Recommended Resources on Anti-Gender Bias Training | GenPORT
Preconceptions based on gender stereotypes still influence how men and women are assessed in research. Because biased societal patterns are applied unconsciously, such assessments can be skewed unintentionally and without the assessor's awareness.
An analysis of unconscious gender bias in academic texts by means of a decision algorithm
Inclusive language focuses on using vocabulary that avoids exclusion or discrimination, especially with regard to gender. Finding gender bias in written documents has traditionally been a manual, time-consuming process. Consequently, studying the use of non-inclusive language in a document, and the impact of document properties (such as author gender or date of presentation) on how many non-inclusive instances are found, is difficult or even impossible for big datasets. This research analyzes gender bias in academic texts using a study corpus of more than 12 billion words drawn from more than one hundred thousand doctoral theses from Spanish universities. For this purpose, an automated algorithm was developed to evaluate the characteristics of each document and look for interactions between the author's age and gender, the year of publication, and the field of knowledge in which the doctoral thesis is framed. The algorithm identified information patterns using a convolutional neural network (CNN) applied to a vector representation of the sentences. The results showed that bias grows with the age of the authors and that men are more likely to use non-inclusive terms (an index of up to 23.12, against an average of 14.99), indicating a greater awareness of inclusiveness among women than among men across all age ranges, and that this awareness increases the younger the candidate is (the index falls to 13.07). In terms of field of knowledge, the humanities are the most biased (20.97), excluding the subgroup of Linguistics, which shows the least bias at all levels (9.90); the fields of science and engineering are also among the least affected (13.46). These results support the assumption that the bias in academic texts (doctoral theses) is unconscious: otherwise, it would not depend on field, age, or gender, and would occur in every field in the same proportion. The innovation of this research lies mainly in the ability to detect, within a textual document in Spanish, whether the use of language can be considered non-inclusive, based on a CNN trained in the context of doctoral theses. A large number of documents were used, covering all accessible doctoral theses from Spanish universities of the last 40 years; a dataset of this size is only manageable by data-mining systems, and the training allows the algorithm to identify non-inclusive terms in context effectively and to compile them into a novel dictionary of non-inclusive terms.
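As a rough sketch of the kind of pipeline the abstract describes, the snippet below builds a sentence-level convolutional classifier: token embeddings form the vector representation of a sentence, and parallel convolutional filters feed a binary inclusive/non-inclusive decision. This is an illustrative reconstruction in PyTorch, not the authors' code; the vocabulary size, dimensions, and label convention are all assumptions.

```python
# Sketch of a sentence-classifying CNN of the kind described in the abstract.
# All hyperparameters (vocab size, embedding width, filter sizes) are assumed.
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_filters=64,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One 1-D convolution per window size, sliding over token positions.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)     # (batch, embed_dim, seq_len)
        # Max-pool each feature map over time, then concatenate all filters.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))      # (batch, num_classes)

model = SentenceCNN(vocab_size=30_000)
batch = torch.randint(1, 30_000, (8, 40))  # 8 sentences of 40 token ids each
logits = model(batch)                      # class 1 = "non-inclusive" (assumed label)
```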
Addressing Gender Dimension in a Horizon 2020 Proposal - EMDESK
When developing your Horizon 2020 proposal, the question of the gender dimension is pertinent, and it mirrors a concern shared by many researchers and proposal writers. This article explains why you need to address the gender dimension in research and innovation and how to do so effectively.
What is the Gender Equality Plan in Horizon Europe?
The Gender Equality Plan (GEP) in Horizon Europe is a mandatory document for public bodies, higher education establishments, and research organisations that apply for Horizon Europe funding.
Gendered Innovations means employing methods of sex and gender analysis as a resource to create new knowledge and stimulate novel design. The term was coined by Londa Schiebinger in 2005. This website features state-of-the-art Methods of Sex and Gender Analysis for basic and applied research. We illustrate how to apply these methods in case studies. Gendered innovations fueled by sophisticated gender methods stimulate the creation of new gender-responsible science and technology, and by doing so enhance the lives of both men and women around the world.
Frequently asked questions: Tech-facilitated gender-based violence | UN Women – Headquarters
As digital technology mediates more and more of our daily lives, it is also facilitating new and heightened forms of gender-based violence. Online violence against women and girls, though not a new phenomenon, has escalated rapidly since the onset of COVID-19—with serious implications for women’s safety and well-being. The impacts of such violence extend beyond the digital sphere, posing a significant threat to the exercise of women’s rights both online and off. Learn more about the issue—and what can be done to prevent and respond to it.
Gender equality and artificial intelligence in Europe. Addressing direct and indirect impacts of algorithms on gender-based discrimination | ERA Forum
This article assesses whether current European law sufficiently captures gender-based biases and algorithmic discrimination in the context of artificial intelligence (AI) and provides a...
Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning | IEEE Conference Publication | IEEE Xplore
Artificial intelligence is increasingly influencing the opinions and behaviour of people in everyday life. However, the over-representation of men in the design of these technologies could quietly undo decades of advances in gender equality. Over centuries, humans developed critical theory to inform decisions and avoid basing them solely on personal experience. Machine intelligence, by contrast, learns primarily from observing the data it is presented with. While a machine's ability to process large volumes of data may address this in part, if that data is laden with stereotypical concepts of gender, the resulting application of the technology will perpetuate this bias. While some recent studies have sought to remove bias from learned algorithms, they largely ignore decades of research on how gender ideology is embedded in language. Awareness of this research, and incorporating it into approaches to machine learning from text, would help prevent the generation of biased algorithms. Leading thinkers in the emerging field addressing bias in artificial intelligence are also primarily female, suggesting that those who are potentially affected by bias are more likely to see, understand, and attempt to resolve it. Gender balance in machine learning is therefore crucial to prevent algorithms from perpetuating gender ideologies that disadvantage women.
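To make concrete how gender ideology embedded in language surfaces in learned models, here is a minimal illustration (not from the paper) of a common probe: projecting occupation words onto a "she minus he" direction in a word-embedding space. The vectors below are toy numbers chosen for the example; real studies use embeddings trained on large corpora, such as word2vec or GloVe.

```python
# Toy demonstration of the "gender direction" probe on word embeddings.
# The 4-dimensional vectors are invented for illustration only.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vec = {
    "he":       np.array([ 1.0, 0.2, 0.0, 0.3]),
    "she":      np.array([-1.0, 0.2, 0.0, 0.3]),
    "nurse":    np.array([-0.7, 0.5, 0.1, 0.2]),
    "engineer": np.array([ 0.8, 0.4, 0.2, 0.1]),
}

gender_direction = vec["she"] - vec["he"]

for word in ("nurse", "engineer"):
    bias = cosine(vec[word], gender_direction)
    # Positive = closer to "she", negative = closer to "he".
    print(f"{word:>9}: {bias:+.2f}")
```

In embeddings trained on real text, occupations such as "nurse" typically land on the "she" side and "engineer" on the "he" side, which is exactly the kind of learned stereotype that, as the abstract argues, propagates into downstream applications.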