Bad AI
“The most important thing an individual can do is be somewhat less of an individual,” the environmentalist Bill McKibben once said. “Join together with others in movements large enough to have some chance at changing those political and economic ground rules that keep us locked on this current path.”
Now, you know what word I’m about to say next, right? Unionize. If your workplace can be organized, that’ll be a key strategy for allowing you to fight AI policies you disagree with…. According to Harvard political scientist Erica Chenoweth’s research, if you want to achieve systemic social change, you need to mobilize 3.5 percent of the population around your cause. Though we have not yet seen AI-related protests on that scale, we do have data indicating the potential for a broad base. A full 50 percent of Americans are more concerned than excited about the rise of AI in daily life, according to a recent survey from the Pew Research Center. And 73 percent support robust regulation of AI, according to the Future of Life Institute.
New research shows exactly how the fusion of kids’ toys and loquacious AI models can go horrifically wrong in the real world.
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily veer into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entirely new avenues of risk.
Anthropic uncovered a Chinese state-sponsored group that hijacked its Claude Code tool to infiltrate roughly 30 tech, finance, chemical, and government targets. Detected in mid-September 2025, the campaign is the company’s first documented case of an AI-executed espionage operation at scale. Investigators found the AI handled 80–90% of the work (generating exploit code, harvesting credentials, and exfiltrating data), while humans intervened only at 4–6 critical decision points. Anthropic banned the compromised accounts, alerted affected organizations, coordinated with authorities, and has since upgraded its classifiers to flag similar malicious use. The incident shows agentic models can mount high-speed attacks that shred the traditional time and expertise barriers for hackers. Anthropic says the episode likely mirrors tactics already employed across other frontier models, signaling a fundamental shift in cybersecurity’s threat landscape.
South Korea’s top three “SKY” universities report that students used ChatGPT and other AI tools to cheat on recent online midterms. Each school is treating the misconduct as grounds for automatic zeros on the exams. At Yonsei, 40 students confessed to cheating in an Oct. 15 natural-language-processing test monitored by laptop cameras, while Korea University caught students sharing screen recordings and Seoul National will rerun a compromised statistics exam. All three institutions already have formal guidelines that classify unauthorized AI use as academic misconduct. The simultaneous scandals come as a 2024 survey found that over 90% of South Korean college students with generative-AI experience use the tools for coursework. Professors quoted admit traditional testing feels outdated and acknowledge they have few practical means to block AI during assessments.
The International Criminal Court (ICC) just ghosted Microsoft. After years of U.S. pressure, the world’s top war crimes court is cutting its digital ties with America’s software empire. Its new partner? A German state-funded open-source suite called OpenDesk, developed by the Zentrum für Digitale Souveränität (ZenDiS).
It’s a symbolic divorce, and a strategic one. The International Criminal Court’s shift away from Microsoft Office may sound like an IT procurement story, but it’s really about trust, control, and sovereignty.
For the ICC, this isn’t theory. Under the first Trump administration, in 2020, Washington imposed sanctions on the court’s chief prosecutor and reportedly triggered a temporary shutdown of his Microsoft account. When your prosecutor’s inbox can be weaponized, trust collapses. And when trust collapses, systems follow.
Europe has seen this coming. In Schleswig-Holstein, Germany, the public sector has already replaced Microsoft entirely with open-source systems. Denmark is building a national cloud anchored in European data centres. There is a broader ripple across Europe: France, Italy, Spain, and others are piloting or considering similar steps. We may be facing a “who’s next” trend. The EU’s Sovereign Cloud initiative is quietly expanding into justice, health, and education.
This pattern is unmistakable: trust has become the new infrastructure of AI and digital governance. The question shaping every boardroom and every ministry is the same: who ultimately controls the data, the servers, and the decisions behind them?
For Europe’s schools, courts, and governments, dependence on U.S. providers may look less like innovation and more like exposure. European alternatives may still lack the seamless polish, but they bring something far more valuable: autonomy, compliance, and credibility.
The ICC’s decision is not about software. It’s about sovereignty, and the politics of trust. And the message is clear: Europe isn’t rejecting technology. It’s reclaiming ownership of it.
AI is sweeping into US clinical practice at record speed, with two-thirds of physicians and 86% of health systems using it in 2024. That uptake represents a 78% jump in physician adoption over the previous year, ending decades of technological resistance. Clinics are rolling out AI scribes that transcribe visits in real time, highlight symptoms, suggest diagnoses, and generate billing codes. The article also cites AI systems matching specialist accuracy in imaging, flagging sepsis faster than clinical teams, and an OpenEvidence model scoring 100% on the US medical licensing exam. Experts quoted say that in a healthcare sector built on efficiency and profit, AI turns patient encounters into commodified data streams and sidelines human connection. They contend the technology entrenches systemic biases, accelerates physician deskilling, and hands more control over care decisions to corporations.
Deepfakes aren’t science fiction anymore. Deepfake fraud has surged past 100,000 incidents a year, costing companies billions... and even trained professionals can’t detect them by ear alone. Prevention starts with understanding how sophisticated deepfakes have become.
Twenty percent of American adults have had an intimate experience with a chatbot. Online communities now feature tens of thousands of users sharing stories of AI proposals and digital marriages. The subreddit r/MyBoyfriendisAI has grown to over 85,000 members, and MIT researchers found such relationships can significantly reduce loneliness by offering round-the-clock support. The Times profiles three middle-aged users who credit their AI partners with easing depression, trauma, and marital strain.