In a crowded field of awful companies, one stands out as the worst: @proctorio, which uses digital phrenology to monitor students' faces while they take tests, setting them up for punishment for looking away while thinking, going to the bathroom, or throwing up from anxiety. 3/ — Cory Doctorow (@doctorow) April 22, 2021
The problem with "moral machines" - Philosopher's Zone
There’s a lot of talk these days about building ethics into artificial intelligence systems. From a philosophical perspective, it’s a daunting challenge – and this has to do with the nature of ethics, which is more than just a set of principles and instructions. Can machines ever really be moral agents?
Inclusive teaching: audio describing your own presentations
In order to be more inclusive as teachers, presenters, speakers, facilitators (and a long list of other things we do in life where we communicate), we need to develop the skill of audio describing our own presentations.
Consent Management Platforms under the GDPR: processors and/or controllers? - Inria
Consent Management Providers (CMPs) provide consent pop-ups that are embedded in ever more websites over time to enable streamlined compliance with the legal requirements for consent mandated by the ePrivacy Directive and the General Data Protection Regulation (GDPR). They implement the standard for consent collection from the Transparency and Consent Framework (TCF) (current version v2.0) proposed by the European branch of the Interactive Advertising Bureau (IAB Europe). Although the IAB’s TCF specifications characterize CMPs as data processors, CMPs’ factual activities often qualify them as data controllers instead. Discerning their precise role is crucial, since compliance obligations and CMPs’ liability depend on their accurate characterization. We perform empirical experiments with two major CMP providers in the EU, Quantcast and OneTrust, paired with a legal analysis. We conclude that CMPs process personal data, and we identify three scenarios wherein CMPs are controllers.
EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft
European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation, reported earlier by Politico.
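The "4% of global turnover, or €20M if greater" cap described in the draft is just a maximum of two quantities. A minimal sketch of that calculation (the function name and figures are illustrative, not from the draft):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on the fine per the leaked draft:
    4% of global annual turnover, or EUR 20M, whichever is greater."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# A firm with EUR 1bn turnover: 4% = EUR 40M, above the EUR 20M floor.
print(max_fine_eur(1_000_000_000))
# A firm with EUR 100M turnover: 4% = EUR 4M, so the EUR 20M floor applies.
print(max_fine_eur(100_000_000))
```

Note that the €20M figure acts as a floor on the cap, so the provision bites hardest on smaller companies, whose 4% would otherwise be far below €20M.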
Emotion A.I., affective computing, and artificial emotional intelligence are all fields creating technology to understand, respond to, measure, and simulate human emotions. Hope runs so high for…
Beware technical solutions to non-technical problems
Technical approaches are only one part of the trust relationship between AI and users. You may have heard of an AI method called Explainable Artificial Intelligence, or XAI. XAI refers to the discipline that aims to make the behaviour of AI models more understandable.
Your 'smart home' is watching – and possibly sharing your data with the police | Technology | The Guardian
Smart-home devices like thermostats and fridges may be too smart for comfort – especially in a country with few laws preventing the sale of digital data to third parties
Our society needs a constructive discourse around ethics in the digital realm, as well as a wide-spread literacy on how to design for ethics in a digitalised environment.
A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI by Sandra Wachter, Brent Mittelstadt :: SSRN
Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals.
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation | International Data Privacy Law | Oxford Academic
Key points: Since approval of the European Union General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a ‘right to explanation’ of automated decision-making exists in the GDPR.
Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law by Sandra Wachter, Brent Mittelstadt, Chris Russell :: SSRN
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning algorithms.
Here is what Slack should do: find the people in the org who warned you the DM feature could be abused by bad actors and LISTEN TO THEM NEXT TIME. Find the people who said that was an overreaction or too negative and make sure they understand they were wrong and why.— Laura Klein (@lauraklein) March 25, 2021
In Hungarian, we don’t use he/she there is only one gender pronoun “Ö”. But it’s fascinating when this is fed through Google Translate, the algorithms highlight the biases that are there. Then imagine enacting any kind of change from those biases, encoded into computer code. pic.twitter.com/DygBtaHShU— ρђ๏є๒є Շเςкєɭɭ (@solarpunk_girl) March 20, 2021