A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any su…
Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good.
Ignoring their own well-publicized calls to regulate AI development and to pause implementation of its applications, major technology companies such as Google, Microsoft, and Meta are racing to fend off regulation and integrate artificial intelligence (AI) into their platforms. The weight of the available evidence suggests that the current wholesale adoption of unregulated AI applications in schools poses a grave danger to democratic civil society and to individual freedom and liberty. Years of warnings and precedents have highlighted the risks posed by the widespread use of pre-AI digital technologies in education, which have obscured decision-making and enabled student data exploitation. Without effective public oversight, the introduction of opaque and unproven AI systems and applications will likely exacerbate these problems. This policy brief explores the harms likely to result if lawmakers and others do not step in with carefully considered measures to prevent these extensive risks. The authors urge school leaders to pause the adoption of AI applications until policymakers have had sufficient time to thoroughly educate themselves and to develop legislation and policies ensuring effective public oversight and control of AI applications in schools.
Suggested Citation: Williamson, B., Molnar, A., & Boninger, F. (2024). Time for a pause: Without effective public oversight, AI in schools will do more harm than good. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/ai
Meta Stole Millions of Books to Train AI—Then Called Them Worthless. Now They’re Suing to Silence One.
Meta stole millions of books to build its AI empire—then declared them worthless, profited from every word, moved to silence the whistleblower, and is now trying to outlaw the very theft it perfected. Meta’s Great AI Heist: Meta scraped over 7 million pirated books to train its LLaMA models—including
I Tested The AI That Calls Your Elderly Parents If You Can't Be Bothered
inTouch says on its website "Busy life? You can’t call your parent every day—but we can." My own mum said she would feel terrible if her child used it.
LinkedIn’s AI action figure fad is ‘obviously unsustainable,’ warns UK tech mogul
If you’ve been scrolling social media over the past week, you may have noticed miniature action figure versions of friends, family, or colleagues neatly wrapped in a blister pack.
These ...
Big tech’s water-guzzling data centers are draining some of the world’s driest regions
Amazon, Google, and Microsoft are expanding data centers in areas already struggling with drought, raising concerns about their use of local water supplies for cooling massive server farms. Luke Barratt and Costanza Gambarini report for The Guardian. In short: The three largest cloud companies are buil...
As ‘Bot’ Students Continue to Flood In, Community Colleges Struggle to Respond
Community colleges have been dealing with an unprecedented phenomenon: fake students bent on stealing financial aid funds. While it has caused chaos at many colleges, some Southwestern faculty feel their leaders haven’t done enough to curb the crisis.
This viral trend of asking ChatGPT to generate a map of Europe is the perfect visual example of what it means to use a large language model for complex topics. If you have no idea about geography, it’s an OK map. Next time you’re tempted to rely on an LLM as a lawyer, doctor, or scientist, think of that map: that’s the kind of output you’re getting.
AI in the enterprise is failing over twice as fast in 2025 as it was in 2024
S&P Global Market Intelligence ran a survey last month, “Voice of the Enterprise: AI & Machine Learning, Use Cases 2025.” They spoke to 1,006 businesses in Europe and North America. [Teleco…
You receive a photo of your child bruised and distressed. Moments later, a voice message arrives. It sounds exactly like them. They’re crying. Begging for…
The False Intention Economy: How AI Systems Are Replacing Human Will with Modeled Behavior
Author’s Note I’ve spent years helping large organizations make sense of their future, not just in terms of emerging technologies but also of the structural shifts those technologies tend to demand. My work has often lived at the intersection of strategy and systems architecture, where the real chal
African Intelligence vs Generative Artificial Intelligence | Wakanyi Hoffman
Sustainable AI Workshop 2024: AI Ethics from the Majority World: Reconstructing the Global Debate through Decolonial Lenses. Speaker: Wakanyi Hoffman.
Unreliable Pedestrian Detection and Driver Alerting in Intelligent Vehicles
Vehicles with advanced driving assist systems that automatically steer, accelerate, and brake are popular, but they are associated with increased driver distraction. This distraction, coupled with unreliable autonomous system performance, leads to vehicles that may be at higher risk for striking pedestrians. To this end, this study tested three consumer vehicles in two different model classes in a pedestrian crossing scenario. In 120 trials, one model never detected the pedestrian or alerted the driver. In 123 trials, the other model vehicles almost always detected the pedestrian, but in 35% of trials they alerted the driver too late. These cars were not consistent, either internally or with one another, in their pedestrian detections and responses, and only sparingly sounded any warnings. These intelligent vehicles also detected the pedestrian earlier if there were no established lane lines, suggesting that in well-marked areas, typically the case for established crossings, pedestrians may be at increased risk of a possible conflict. This research demonstrates that artificial intelligence can lead to unreliable vehicle behaviors and warnings in pedestrian detection, potentially catching drivers off guard. These results further indicate that industry needs to do more testing of intelligent systems, that regulators should reevaluate the self-certification approval process, and that more fundamental work is needed in academia on the performance and quality of technologies with embedded neural networks.
My medical data has been breached four times in the last three years. It wasn’t stolen in some dramatic hack but quietly lost, stolen, intercepted, or acquired by the systems that were supposed to protect it: healthcare providers, business associates, and digital services I never opted into bu
A Sale of 23andMe’s Data Would Be Bad for Privacy. Here’s What Customers Can Do.
The CEO of 23andMe has recently said she’d consider selling the genetic genealogy testing company–and with it, the sensitive DNA data it has collected and stored from many of its 15 million customers. Customers and their relatives are rightly concerned.
John Skiles Skinner: "I helped build a government AI system. DOGE fired…" - carhenge.club
I helped build a government AI system. DOGE fired me, rolled the AI out to the whole agency, and implied the AI can do my job and the jobs of the others they've fired.
It can't. But, what DOGE accidentally revealed about themselves in the process is fascinating. 🧵