Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further. Researchers experimenting with GPT-3, the AI text-generation model, found that it is not ready to replace human respondents in the chatbox.
·artificialintelligence-news.com·
The Shuri Network Achievements Summary 2020
How many times have you seen an all-female, Black and minority ethnic (BME) panel talking about technology? For many people, their first time would have been the Shuri Network launch last July. The Shuri Network was launched in 2019 to support women of colour in NHS digital health to develop the skills and confidence to progress into senior leadership positions, and to help NHS leadership teams more closely represent the diversity of their workforce.
·up.raindrop.io·
Clearview AI's plan for invasive facial recognition is worse than you think
Clearview AI's latest patent application reveals the firm's ongoing plans to use surveillance against vulnerable individuals. According to BuzzFeed News, a patent filed in August describes in detail how applications of facial recognition can range from governmental to social uses, such as dating and professional networking. Clearview AI's patent claims that people will be able to identify unhoused individuals and drug users simply by accessing the company's face-matching system.
·inputmag.com·
Artificial Intelligence in Hiring: Assessing Impacts on Equality
The use of artificial intelligence (AI) in hiring presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, our new research, which examines tools for auditing AI used in recruitment, finds these tools are often inadequate for ensuring compliance with UK Equality Law, good governance, and best practice. We argue in this report that a more comprehensive approach than technical auditing is needed to safeguard equality in the use of AI for hiring, which shapes access to work. Here, we present first steps that could be taken to achieve this. We also publish a prototype AI Equality Impact Assessment, which we plan to develop and pilot.
·up.raindrop.io·
The coming war on the hidden algorithms that trap people in poverty
A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services. Increasingly, the fight over a client’s eligibility involves some kind of algorithm. “In some cases, it probably should just be shut down because there’s no way to make it equitable.”
·technologyreview.com·
How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly)
Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive…
·medium.com·
Councils scrapping use of algorithms in benefit and welfare decisions
Call for more transparency on how such tools are used in public services as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”.
·theguardian.com·
Meaningful Transparency and (in)visible Algorithms
Can transparency bring accountability to public-sector algorithmic decision-making (ADM) systems? High-profile retractions have taken place against a shift in public sentiment towards greater scepticism and mistrust of ‘black box’ technologies, evidenced in increasing awareness of the risks that potentially invasive profiling poses to citizens.
·adalovelaceinstitute.org·
Police Built an AI To Predict Violent Crime. It Was Seriously Flawed
A Home Office-funded project that used artificial intelligence to predict gun and knife crime was found to be wildly inaccurate. “Basing our arguments on inaccuracy is problematic because the tech deficiencies are solvable through time. Even if the algorithm was set to be 100 percent accurate, there would still be bias in this system.”
·wired.co.uk·
Algorithmic Colonisation of Africa - Abeba Birhane
Colonialism in the age of Artificial Intelligence takes the form of “state-of-the-art algorithms” and “AI-driven solutions” unsuited to African problems, and hinders the development of local products, leaving the continent dependent on Western software and infrastructure.
·theelephant.info·
The Dark Side of Digitisation and the Dangers of Algorithmic Decision-Making - Abeba Birhane
As we hand over decision-making regarding social issues to automated systems developed by profit-driven corporations, not only are we allowing our social concerns to be dictated by the profit incentive, but we are also handing over moral and ethical questions to the corporate world, argues Abeba Birhane.
·theelephant.info·
Black programmers and technologists who inspire us
This year, in honor of Black History Month, the Codecademy Team is celebrating Black leaders who are working to build a more inclusive, more welcoming, and more diverse tech industry. It's important to celebrate Black people in all our roles and in all our diversity. For UK Black History Month (BHM), we're keen to see similar profiling of technologists who want to raise their visibility, so we can celebrate their work.
·news.codecademy.com·
Between Antidiscrimination and Data: Understanding Human Rights Discourse on Automated Discrimination in Europe
Automated decision making threatens to disproportionately impact society’s most vulnerable communities living at the intersection of economic and social marginalization. The report examines human rights discourse on automated discrimination in Europe.
·eprints.lse.ac.uk·