Digital Ethics

3749 bookmarks
it’s the interface
A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any su…
·scatter.wordpress.com·
Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good.
Ignoring their own well-publicized calls to regulate AI development and to pause implementation of its applications, major technology companies such as Google, Microsoft, and Meta are racing to fend off regulation and integrate artificial intelligence (AI) into their platforms. The weight of the available evidence suggests that the current wholesale adoption of unregulated AI applications in schools poses a grave danger to democratic civil society and to individual freedom and liberty. Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems. This policy brief explores the harms likely to result if lawmakers and others do not step in with carefully considered measures to prevent these extensive risks. The authors urge school leaders to pause the adoption of AI applications until policymakers have had sufficient time to thoroughly educate themselves and develop legislation and policies ensuring effective public oversight and control of school applications.

Suggested Citation: Williamson, B., Molnar, A., & Boninger, F. (2024). Time for a pause: Without effective public oversight, AI in schools will do more harm than good. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/ai
·nepc.colorado.edu·
Meta Stole Millions of Books to Train AI—Then Called Them Worthless. Now They’re Suing to Silence One.
Meta stole millions of books to build its AI empire—then declared them worthless, profited from every word, moved to silence the whistleblower, and is now trying to outlaw the very theft it perfected.
Meta’s Great AI Heist
Meta scraped over 7 million pirated books to train its LLaMA models—including
·linkedin.com·
AI Agents Are Here. What Now?
·huggingface.co·
The real problem of writing with AI - Fast Company
Whether it’s a private email or a public LinkedIn post, we need to admit that using AI to write is simply poor etiquette.
·fastcompany.com·
Big tech’s water-guzzling data centers are draining some of the world’s driest regions
Amazon, Google, and Microsoft are expanding data centers in areas already struggling with drought, raising concerns about their use of local water supplies for cooling massive server farms. Luke Barratt and Costanza Gambarini report for The Guardian. In short: The three largest cloud companies are buil...
·dailyclimate.org·
Manifesto for a Humane Web
We need to build a better web. A web by and for humans.
·humanewebmanifesto.com·
As ‘Bot’ Students Continue to Flood In, Community Colleges Struggle to Respond
Community colleges have been dealing with an unprecedented phenomenon: fake students bent on stealing financial aid funds. While it has caused chaos at many colleges, some Southwestern faculty feel their leaders haven’t done enough to curb the crisis.
·voiceofsandiego.org·
#ai #tech | Maria Sukhareva
This viral trend of asking ChatGPT to generate a map of Europe is the perfect visual example of what it means to use a large language model for complex topics. If you have no idea about geography, it’s an OK map. Next time you’re tempted to rely on an LLM as a lawyer, doctor, or scientist, think of that map: that’s the kind of output you’re getting. #AI #Tech
·linkedin.com·
Scams | Parven Kaur
You receive a photo of your child bruised and distressed. Moments later, a voice message arrives. It sounds exactly like them. They’re crying. Begging for…
·linkedin.com·
The False Intention Economy: How AI Systems Are Replacing Human Will with Modeled Behavior
Author’s Note
I’ve spent years helping large organizations make sense of their future, not just in terms of emerging technologies but also of the structural shifts those technologies tend to demand. My work has often lived at the intersection of strategy and systems architecture, where the real chal
·linkedin.com·
Unreliable Pedestrian Detection and Driver Alerting in Intelligent Vehicles
Vehicles with advanced driving assist systems that automatically steer, accelerate, and brake are popular but are associated with increased driver distraction. This distraction, coupled with unreliable autonomous system performance, leads to vehicles that may be at higher risk of striking pedestrians. To this end, this study tested three consumer vehicles in two different model classes in a pedestrian crossing scenario. In 120 trials, one model never detected the pedestrian or alerted the driver. In 123 trials, the other model vehicles almost always detected the pedestrian but in 35% of trials alerted the driver too late. These cars were not consistent internally or with one another in pedestrian detections and responses, and only sparingly sounded any warnings. These intelligent vehicles also detected the pedestrian earlier if there were no established lane lines, suggesting that in well-marked areas, typically the case for established crossings, pedestrians may be at increased risk of a possible conflict. This research demonstrates that artificial intelligence can lead to unreliable vehicle behaviors and warnings in pedestrian detection, potentially catching drivers off guard. These results further indicate that industry needs to do more testing of intelligent systems, that regulators should reevaluate the self-certification approval process, and that more fundamental work is needed in academia around the performance and quality of technologies with embedded neural networks.
·ieeexplore.ieee.org·
09664424
·ieeexplore.ieee.org·
Fairness FAQ
·angelina-wang.github.io·
Just Data, Until the Mirror Stared Back
My medical data has been breached four times in the last three years. It wasn’t stolen in some dramatic hack but quietly lost [insert: stolen, intercepted, acquired] by the systems that were supposed to protect it: healthcare providers, business associates, and digital services I never opted into bu
·linkedin.com·
A Sale of 23andMe’s Data Would Be Bad for Privacy. Here’s What Customers Can Do.
The CEO of 23andMe has recently said she’d consider selling the genetic genealogy testing company–and with it, the sensitive DNA data that it has collected and stored from many of its 15 million customers. Customers and their relatives are rightly concerned.
·eff.org·
John Skiles Skinner: "I helped build a government AI system. DOGE fired…" - carhenge.club
I helped build a government AI system. DOGE fired me, rolled the AI out to the whole agency, and implied the AI can do my job and the jobs of the others they've fired. It can't. But, what DOGE accidentally revealed about themselves in the process is fascinating. 🧵
·carhenge.club·