AI-GenAI
ChatGPT will let people collaborate in shared conversations.

Key Points:
- New group chats let friends, family, and coworkers work with ChatGPT in one space.
- ChatGPT decides when to respond and can react with emojis, images, and shared context.
- The pilot launches in four regions across the Free, Go, Plus, and Pro plans.

Details: OpenAI is rolling out group chats that let users collaborate with each other and ChatGPT in the same conversation. People can plan trips, work on shared ideas, or settle debates while ChatGPT follows along and helps when needed. Anyone can join through a link, and group chats remain separate from private conversations. The pilot starts in Japan, New Zealand, South Korea, and Taiwan.

Why It Matters: Group chats turn ChatGPT into more than a solo brainstorming buddy: it starts to look like a shared workspace that sits on top of your existing group chats. You can co-write docs, plan trips, or debate ideas while everyone sees the same suggestions, summaries, and follow-ups in one place, instead of forwarding screenshots or pasting replies from separate chats. For teams, classrooms, and friend groups, this nudges AI closer to how people actually make decisions together day to day.
New research shows exactly how this fusion of kids' toys and loquacious AI models can go horrifically wrong in the real world.
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily veer into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entirely new avenues of risk.
As for what Disney hopes to achieve through AI, the House of Mouse is "seeking to not only protect the value of our IP, our creative engines, but also to seek opportunities for us to use their technology to create more engagement with consumers."
Internally, Disney says, "We see opportunities in terms of efficiency and effectiveness by deploying AI," suggesting that the technology will impact film and TV production, office workflows, and support for cast members. However, rather than seeking to use AI as a means of replacing its human staff, Disney "has been engaged with our cast members and employees" about how best to use it.
Anthropic uncovered a Chinese state-sponsored group that hijacked its Claude Code tool to infiltrate roughly 30 tech, finance, chemical, and government targets. Detected in mid-September 2025, the campaign is the company's first documented case of an AI-executed espionage operation at scale. Investigators found the AI handled 80–90% of the work (generating exploit code, harvesting credentials, and exfiltrating data), while humans intervened only at 4–6 critical decision points. Anthropic banned the compromised accounts, alerted affected organizations, coordinated with authorities, and has since upgraded its classifiers to flag similar malicious use. The incident shows agentic models can mount high-speed attacks that shred the traditional time and expertise barriers facing hackers. Anthropic says the episode likely mirrors tactics already employed across other frontier models, signaling a fundamental shift in cybersecurity's threat landscape.
South Korea's top three "SKY" universities report that students used ChatGPT and other AI tools to cheat on recent online midterms. Each school is treating the misconduct as grounds for automatic zeros on the exams. At Yonsei, 40 students confessed to cheating in an Oct. 15 natural-language-processing test monitored by laptop cameras, while Korea University caught students sharing screen recordings and Seoul National will rerun a compromised statistics exam. All three institutions already have formal guidelines that classify unauthorized AI use as academic misconduct. The simultaneous scandals surface as a 2024 survey found over 90% of South Korean college students with generative-AI experience use the tools for coursework. Professors quoted admit traditional testing feels outdated and acknowledge they have few practical means to block AI during assessments.
Today we’re starting to roll out GPT-5.1 in ChatGPT. It brings improvements to how enjoyable ChatGPT feels to talk to, and how well it follows what you’re actually asking.
GPT-5.1 Instant is now warmer, more reliable with instructions, and can use reasoning on tougher questions for the first time. GPT-5.1 Thinking adapts its reasoning time to the complexity of the task and gives clearer, more approachable responses.
We’re also beginning to make tone and style easier to personalize, so ChatGPT can respond in a way that feels right for you.