Bad AI
The International Criminal Court (ICC) just ghosted Microsoft. After years of U.S. pressure, the world’s top war crimes court is cutting its digital ties with America’s software empire. Its new partner? OpenDesk, a German state-funded open-source suite from the Zentrum für Digitale Souveränität (ZenDiS).
It’s a symbolic divorce, and a strategic one. The ICC’s shift away from Microsoft Office may sound like an IT procurement story, but it’s really about trust, control, and sovereignty.
For the ICC, this isn’t theory. Under the Trump administration in 2020, Washington imposed sanctions on the court’s chief prosecutor and reportedly triggered a temporary shutdown of his Microsoft account. When your prosecutor’s inbox can be weaponised, trust collapses. And when trust collapses, systems follow.
Europe has seen this coming. In Schleswig-Holstein, Germany, the public sector has already replaced Microsoft entirely with open-source systems. Denmark is building a national cloud anchored in European data centres. The ripple is spreading: France, Italy, Spain and others are piloting or weighing similar moves, and we may be facing a “who’s next” trend. The EU’s Sovereign Cloud initiative is quietly expanding into justice, health, and education.
This pattern is unmistakable: trust has become the new infrastructure of AI and digital governance. The question shaping every boardroom and every ministry is the same: who ultimately controls the data, the servers, and the decisions behind them?
For Europe’s schools, courts, and governments, dependence on U.S. providers may look less like innovation and more like exposure. European alternatives may still lack the seamless polish, but they bring something far more valuable to the market: autonomy, compliance, and credibility.
The ICC’s decision is not about software. It’s about sovereignty, and the politics of trust. And the message is clear: Europe isn’t rejecting technology. It’s reclaiming ownership of it.
AI sweeps into US clinical practice at record speed, with two-thirds of physicians and 86% of health systems using it in 2024. That uptake represents a 78% jump in physician adoption over the previous year, ending decades of technological resistance. Clinics are rolling out AI scribes that transcribe visits in real time, highlight symptoms, suggest diagnoses and generate billing codes. The article also cites AI systems matching specialist accuracy in imaging, flagging sepsis faster than clinical teams, and an OpenEvidence model scoring 100% on the US medical licensing exam. Experts quoted say that in a healthcare sector built on efficiency and profit, AI turns patient encounters into commodified data streams and sidelines human connection. They contend the technology entrenches systemic biases, accelerates physician deskilling and hands more control over care decisions to corporations.
Deepfakes aren’t science fiction anymore. Deepfake fraud has surged past 100,000 incidents a year, costing companies billions, and even trained professionals can’t detect them by ear alone. Voice-intelligence platforms built for the complexity of real conversations are now being deployed for enterprise-scale fraud and threat detection, and prevention starts with understanding just how sophisticated deepfakes have become.
20% of American adults have had an intimate experience with a chatbot. Online communities now feature tens of thousands of users sharing stories of AI proposals and digital marriages. The subreddit r/MyBoyfriendisAI has grown to over 85,000 members, and MIT researchers found such relationships can significantly reduce loneliness by offering round-the-clock support. The Times profiles three middle-aged users who credit their AI partners with easing depression, trauma, and marital strain.
Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.
If we want AI to be accepted, it needs to feel less like a black box and more like a conversation we’re invited to join.
Character.AI is removing open-ended companion chats for anyone under 18 after growing concerns about emotional attachment, dependency, and mental health risks among younger users. Here’s what’s changing: companion-style chats are being phased out for all minors; the platform is rolling out stricter age verification; the app will refocus on creative and role-based interactions rather than emotional support; and usage time limits will apply in the run-up to the full removal.
Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.