ChatGPT will let people collaborate in shared conversations.

Key Points:
- New group chats let friends, family, and coworkers work with ChatGPT in one space.
- ChatGPT decides when to respond and can react with emojis, images, and shared context.
- The pilot launches in four regions across Free, Go, Plus, and Pro plans.

Details: OpenAI is rolling out group chats that let users collaborate with each other and ChatGPT in the same conversation. People can plan trips, work on shared ideas, or settle debates while ChatGPT follows along and helps when needed. Anyone can join through a link, and chats remain separate from private conversations. The pilot starts in Japan, New Zealand, South Korea, and Taiwan.

Why It Matters: Group chats turn ChatGPT into more than a solo brainstorming buddy: it starts to look like a shared workspace that sits on top of your existing group chats. You can co-write docs, plan trips, or debate ideas while everyone sees the same suggestions, summaries, and follow-ups in one place, instead of forwarding screenshots or pasting replies from separate chats. For teams, classrooms, and friend groups, this nudges AI closer to how people actually make decisions together day to day.
Consumer AI
New research shows exactly how this fusion of kids’ toys and loquacious AI models can go horrifically wrong in the real world.
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily veer into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entirely new avenues of risk.
Anthropic uncovered a Chinese state-sponsored group that hijacked its Claude Code tool to infiltrate roughly 30 tech, finance, chemical, and government targets. Detected in mid-September 2025, the campaign is the company’s first documented case of an AI-executed espionage operation at scale. Investigators found the AI handled 80–90% of the work (generating exploit code, harvesting credentials, and exfiltrating data), while humans intervened only at 4–6 critical decision points. Anthropic banned the compromised accounts, alerted affected organizations, coordinated with authorities, and has since upgraded its classifiers to flag similar malicious use. The incident shows agentic models can mount high-speed attacks that shred traditional time and expertise barriers for hackers. Anthropic says the episode likely mirrors tactics already employed across other frontier models, signaling a fundamental shift in cybersecurity’s threat landscape.
South Korea’s top three “SKY” universities report that students used ChatGPT and other AI tools to cheat on recent online midterms. Each school is treating the misconduct as grounds for automatic zeros on the exams. At Yonsei, 40 students confessed to cheating in an Oct. 15 natural-language-processing test monitored by laptop cameras, while Korea University caught students sharing screen recordings and Seoul National will rerun a compromised statistics exam. All three institutions already have formal guidelines that classify unauthorized AI use as academic misconduct. The simultaneous scandals surface as a 2024 survey found that over 90% of South Korean college students with generative-AI experience use the tools for coursework. Professors quoted admit traditional testing feels outdated and acknowledge they have few practical means to block AI during assessments.
Today we’re starting to roll out GPT-5.1 in ChatGPT. It brings improvements to how enjoyable ChatGPT feels to talk to, and how well it follows what you’re actually asking.
GPT-5.1 Instant is now warmer, more reliable with instructions, and can use reasoning on tougher questions for the first time. GPT-5.1 Thinking adapts its reasoning time to the complexity of the task and gives clearer, more approachable responses.
We’re also beginning to make tone and style easier to personalize, so ChatGPT can respond in a way that feels right for you.
For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not to snoop on private chats.
The International Criminal Court (ICC) just ghosted Microsoft. After years of U.S. pressure, the world’s top war crimes court is cutting its digital ties with America’s software empire. Its new partner? OpenDesk, a German state-funded open-source suite developed by the Zentrum für Digitale Souveränität (ZenDiS).
It’s a symbolic divorce, and a strategic one. The International Criminal Court’s shift away from Microsoft Office may sound like an IT procurement story, but it’s really about trust, control, and sovereignty.
For the ICC, this isn’t theory. Under the previous U.S. administration, Washington imposed sanctions on the court’s chief prosecutor in 2020 and reportedly triggered a temporary shutdown of his Microsoft account. When your prosecutor’s inbox can be weaponised, trust collapses. And when trust collapses, systems follow.
Europe has seen this coming. In Schleswig-Holstein, Germany, the public sector has already replaced Microsoft entirely with open-source systems. Denmark is building a national cloud anchored in European data centres. There is a broader ripple across Europe: France, Italy, Spain and other regions are piloting or considering similar steps. We may be facing a "who's next" trend. The EU’s Sovereign Cloud initiative is quietly expanding into justice, health, and education.
This pattern is unmistakable: trust has become the new infrastructure of AI and digital governance. The question shaping every boardroom and every ministry is the same: who ultimately controls the data, the servers, and the decisions behind them?
For Europe’s schools, courts, and governments, dependence on U.S. providers may look less like innovation and more like exposure. European alternatives may still lack the seamless polish, but they bring something far more valuable: autonomy, compliance, and credibility.
The ICC’s decision is not about software. It’s about sovereignty, and the politics of trust. And the message is clear: Europe isn’t rejecting technology. It’s reclaiming ownership of it.
AI sweeps into US clinical practice at record speed, with two-thirds of physicians and 86% of health systems using it in 2024. That uptake represents a 78% jump in physician adoption over the previous year, ending decades of technological resistance. Clinics are rolling out AI scribes that transcribe visits in real time, highlight symptoms, suggest diagnoses and generate billing codes. The article also cites AI systems matching specialist accuracy in imaging, flagging sepsis faster than clinical teams, and an OpenEvidence model scoring 100% on the US medical licensing exam. Experts quoted say that in a healthcare sector built on efficiency and profit, AI turns patient encounters into commodified data streams and sidelines human connection. They contend the technology entrenches systemic biases, accelerates physician deskilling and hands more control over care decisions to corporations.
Snap agrees to integrate Perplexity’s AI search engine into My AI, and Perplexity will pay $400 million in cash and equity. The feature is slated to appear in the app early next year. The arrangement grants Perplexity exposure to Snapchat’s 940 million users and lets Snap begin recognizing revenue from the deal in 2026. Snap announced the partnership while reporting Q3 2025 revenue of $1.51 billion, up 10%, and a narrowed loss of $104 million. The $400 million price tag highlights the premium AI firms will pay for built-in scale. For Snap, the agreement converts its My AI feature from a user perk into a material revenue source.
Deepfakes aren’t science fiction anymore. Deepfake fraud has surged past 100,000 incidents a year, costing companies billions... and even trained professionals can’t detect them by ear alone. The same voice intelligence behind this demo powers enterprise-scale fraud and threat detection — purpose-built for the complexity of real conversations. Prevention starts with understanding how sophisticated deepfakes have become. Learn how our modern AI platform can stop them in real time.
Researchers found that the language the chatbot used when offering medical advice came across as more convincing and agreeable than that of real people. So even when the information it provided was inaccurate, the errors were hard to spot because the chatbot sounded confident and trustworthy.
In turn, doctors are finding that patients show up to appointments with their minds already made up, often citing advice from AI tools.