Clearview AI's plan for invasive facial recognition is worse than you think
Clearview AI's latest patent application reveals the firm's ongoing plans to use surveillance against vulnerable individuals. According to BuzzFeed News, a patent application filed in August describes in detail how facial recognition could be applied in settings ranging from government use to social uses such as dating and professional networking. The patent claims that users will be able to identify unhoused individuals and people who use drugs simply by accessing the company's face-matching system.
·inputmag.com·
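The patent does not describe the face-matching system at the code level, but identification systems of this kind typically compare face embeddings against an enrolled gallery. A minimal sketch of that general pattern, assuming a hypothetical `embed_face` model and using random unit vectors as stand-ins for real embeddings:

```python
# Illustrative sketch of embedding-based face matching (NOT Clearview's
# actual system; the embedding model and gallery here are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def embed_face(image_id: str) -> np.ndarray:
    """Stand-in for a real face-embedding model: returns a random unit vector."""
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

# A "gallery" of enrolled identities, each with a stored embedding.
gallery = {name: embed_face(name) for name in ["alice", "bob", "carol"]}

def identify(probe: np.ndarray, threshold: float = 0.6) -> str | None:
    """Return the closest gallery identity if cosine similarity clears the threshold."""
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = float(probe @ emb)  # cosine similarity, since vectors are unit-length
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

print(identify(gallery["alice"]))  # matches "alice" with similarity 1.0
```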
How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly) | by The GovLab | Data Stewards Network | Medium
Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive…
·medium.com·
Councils scrapping use of algorithms in benefit and welfare decisions
Call for more transparency on how such tools are used in public services as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”.
·theguardian.com·
Meaningful Transparency and (in)visible Algorithms
Can transparency bring accountability to public-sector algorithmic decision-making (ADM) systems? High-profile retractions have taken place against a shift in public sentiment towards greater scepticism and mistrust of 'black box' technologies, evidenced by growing awareness of the risks that potentially invasive profiling poses to citizens.
·adalovelaceinstitute.org·
Police Built an AI To Predict Violent Crime. It Was Seriously Flawed
A Home Office-funded project that used artificial intelligence to predict gun and knife crime was found to be wildly inaccurate. “Basing our arguments on inaccuracy is problematic because the tech deficiencies are solvable through time. Even if the algorithm was set to be 100 percent accurate, there would still be bias in this system.”
·wired.co.uk·
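The quoted criticism, that a perfectly accurate model can still be biased, follows from what such a model is trained to predict: recorded crime, which reflects where police already look. A toy simulation (all numbers made up) of two areas with identical true offence rates but unequal patrol levels:

```python
# Toy illustration (all numbers hypothetical): a model that predicts
# *recorded* crime with perfect accuracy still reproduces patrol bias.
true_offence_rate = {"area_a": 0.05, "area_b": 0.05}  # identical by construction
patrol_intensity  = {"area_a": 0.9,  "area_b": 0.3}   # historical over-policing of A

# Recorded crime depends on both offending and how closely an area is watched.
recorded_rate = {a: true_offence_rate[a] * patrol_intensity[a]
                 for a in true_offence_rate}

# A "100 percent accurate" model of the recorded data simply returns it.
def perfect_model(area: str) -> float:
    return recorded_rate[area]

for area in recorded_rate:
    print(area, "predicted risk:", perfect_model(area))
# area_a predicted risk: 0.045
# area_b predicted risk: 0.015
# Equal true offence rates, but A looks 3x riskier, so it attracts even
# more patrols, which generate more records: a feedback loop.
```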
Co-op is using facial recognition tech to scan and track shoppers | WIRED UK
Branches of Co-op in the south of England have been using real-time facial recognition cameras to scan shoppers entering stores. § See US-based studies on FRT, which show the technology can be unreliable for black people, especially black women.
·wired.co.uk·
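The US studies alluded to here (for example NIST's demographic evaluations of face recognition) reach that conclusion by disaggregating error rates by group rather than reporting one overall accuracy figure. A minimal sketch of that kind of evaluation, on fabricated match scores:

```python
# Sketch of disaggregating face-recognition errors by demographic group.
# The scores and groups below are fabricated for illustration only.
from collections import defaultdict

# (similarity score, is_same_person, demographic group) for probe/gallery pairs
trials = [
    (0.91, True,  "group_x"), (0.42, False, "group_x"), (0.55, False, "group_x"),
    (0.88, True,  "group_y"), (0.71, False, "group_y"), (0.69, False, "group_y"),
]
THRESHOLD = 0.6  # pairs scoring above this are declared a "match"

false_matches = defaultdict(int)
impostor_pairs = defaultdict(int)
for score, same_person, group in trials:
    if not same_person:              # impostor pair: any declared match is an error
        impostor_pairs[group] += 1
        if score >= THRESHOLD:
            false_matches[group] += 1

for group in impostor_pairs:
    fmr = false_matches[group] / impostor_pairs[group]
    print(f"{group}: false match rate = {fmr:.2f}")
# group_x: false match rate = 0.00
# group_y: false match rate = 1.00  -> a single overall number would hide this gap
```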