KPMG lodges complaint after AI-generated material was used to implicate them in non-existent scandals
A Senate inquiry into the Australian consultancy industry concedes its integrity has been undermined by a submission's reliance on case studies generated by Google's Bard AI tool
AI Cameras Took Over One Small American Town. Now They're Everywhere
Hundreds of documents obtained by 404 Media show how Fusus, a system for linking a town’s security cameras into one central hub and adding AI to them, has spread across the country.
Clearview AI's plan for invasive facial recognition is worse than you think
Clearview AI's latest patent application reveals the firm's ongoing plans to use surveillance against vulnerable individuals. According to BuzzFeed News, a patent application filed in August describes in detail how the applications of facial recognition can range from governmental to social — like dating and professional networking. Clearview AI's patent claims that people will be able to identify individuals who are unhoused or who use drugs simply by accessing the company's face-matching system.
Councils scrapping use of algorithms in benefit and welfare decisions
Call for more transparency on how such tools are used in public services as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”.
Meaningful Transparency and (in)visible Algorithms
Can transparency bring accountability to public-sector algorithmic decision-making (ADM) systems? High-profile retractions have taken place against a backdrop of shifting public sentiment towards greater scepticism and mistrust of ‘black box’ technologies, evidenced in growing awareness of the risks that potentially invasive profiling poses for citizens.
Police Built an AI To Predict Violent Crime. It Was Seriously Flawed
A Home Office-funded project that used artificial intelligence to predict gun and knife crime was found to be wildly inaccurate. “Basing our arguments on inaccuracy is problematic because the tech deficiencies are solvable through time. Even if the algorithm was set to be 100 percent accurate, there would still be bias in this system.”
Co-op is using facial recognition tech to scan and track shoppers | WIRED UK
Branches of Co-op in the south of England have been using real-time facial recognition cameras to scan shoppers entering stores.
See US-based studies on facial recognition technology (FRT), which show the technology can be unreliable for Black people, especially Black women.