AI-GenAI
- Use Markdown lists with dashed bullets
- Add “MUST” (in caps) before critical requirements
- Include compositional constraints like “Pulitzer Prize-winning cover photo for The New York Times” to improve quality
- Add “NEVER include any text or watermarks” to avoid unwanted elements
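To make these tips concrete, here is a minimal sketch that assembles such a prompt and sends it through the OpenAI Python SDK; the SDK, model choice, and subject matter are assumptions for illustration rather than anything the tips prescribe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The prompt applies the tips above: dashed Markdown bullets, a capitalized
# MUST before the critical requirement, a compositional quality constraint,
# and a NEVER clause to suppress unwanted elements.
prompt = """Generate an image with these requirements:
- MUST show a lone lighthouse keeper on a storm-battered coast
- Composition: Pulitzer Prize-winning cover photo for The New York Times
- NEVER include any text or watermarks"""

# Model name and size are assumptions; swap in whichever image model you use.
result = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
print(result.data[0].url)
```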
In this tutorial, you will learn how to record and summarize meetings directly in the ChatGPT desktop app without third-party tools like Fireflies or Otter, a workflow well suited to companies that block external recording tools and to privacy-sensitive teams.

Step-by-step:
- Download the ChatGPT desktop app and log in with a Plus, Pro, Business, or Enterprise account (free accounts don't have full access)
- Click the "Record" button during your meeting, lecture, or session; a recording panel appears and runs quietly in the background (always ask permission before recording others)
- Click "Stop" when finished, then send the recording to ChatGPT for a structured breakdown including a summary, key points, action items, and suggested follow-ups
- Chat with your transcript by asking follow-ups like "Rewrite the summary in bullets for a Slack update" or "Highlight any risks or unanswered questions"

Our Take: With ChatGPT Record, you get the convenience of tools like Fireflies/Otter without having to invite an awkward bot into every Zoom call.
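For those who prefer scripting the summarization step, here is a minimal sketch along the same lines using the OpenAI Python SDK, assuming you have exported a transcript to a text file; the model name and prompt wording are illustrative assumptions, and the Record feature itself lives only in the desktop app:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical input: a transcript exported from your recording.
with open("meeting_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[
        {
            "role": "system",
            "content": "Summarize this meeting transcript into: a short summary, "
                       "key points, action items, and suggested follow-ups.",
        },
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```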
With Lindy, you can build AI agents and apps in minutes simply by describing what you want in plain English. From inbound lead qualification to AI-powered customer support and full-blown apps, Lindy has hundreds of agents that are ready to work for you 24/7/365.
TikTok is testing a setting that lets users reduce or increase AI-generated videos in their feeds. The company disclosed the change at its European trust and safety forum, noting 1.3 billion clips on the platform now carry an AI label. Users adjust the preference through the “manage topic” menu under “AI-generated content,” a tool that will roll out globally after a few weeks of testing. TikTok will also apply an “AI-made” watermark to material created with its own AI tools or flagged by the C2PA standard to enforce transparency. AI videos remain a minority within the service’s 100 million daily uploads, yet they have fueled moderation and quality concerns. TikTok says automated systems have lowered the volume of shocking content reaching human moderators by 76 percent, highlighting its dual use of AI for user choice and internal safety.
Google releases its Gemini 3 model and unveils Antigravity, an agent-based coding platform that can autonomously execute tasks on a user’s computer. The launch moves the conversation beyond text generation to AI that plans, codes, and coordinates work with human oversight. In real-world tests, Gemini 3 built a playable game from a single prompt and created a full website that summarized years of blog posts, all while routing approvals through an inbox interface. Antigravity reads local files, writes code, conducts web research, and even controls the browser to validate its output. The model also cleaned messy research data, devised fresh hypotheses, executed statistical analysis, and delivered a 14-page journal-style paper with minimal guidance. The author says managing Gemini 3 feels like supervising a capable graduate assistant rather than coaxing a chatbot.
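Antigravity is a desktop product rather than an API, but the single-prompt generation described above can be sketched with Google's genai Python SDK; the model identifier below is a placeholder assumption, so confirm it against current documentation:

```python
from google import genai

client = genai.Client()  # assumes GOOGLE_API_KEY is set in the environment

# Single-prompt generation in the spirit of the playable-game demo above.
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model id, an assumption
    contents="Write a complete, playable Snake game as a single self-contained HTML file.",
)

# Save the generated page so it can be opened directly in a browser.
with open("snake.html", "w") as f:
    f.write(response.text)
```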
“The most important thing an individual can do is be somewhat less of an individual,” the environmentalist Bill McKibben once said. “Join together with others in movements large enough to have some chance at changing those political and economic ground rules that keep us locked on this current path.”
Now, you know what word I’m about to say next, right? Unionize. If your workplace can be organized, that’ll be a key strategy for allowing you to fight AI policies you disagree with…. According to Harvard political scientist Erica Chenoweth’s research, if you want to achieve systemic social change, you need to mobilize 3.5 percent of the population around your cause. Though we have not yet seen AI-related protests on that scale, we do have data indicating the potential for a broad base. A full 50 percent of Americans are more concerned than excited about the rise of AI in daily life, according to a recent survey from the Pew Research Center. And 73 percent support robust regulation of AI, according to the Future of Life Institute.
ChatGPT will let people collaborate in shared conversations.

Key Points:
- New group chats let friends, family, and coworkers work with ChatGPT in one space.
- ChatGPT decides when to respond and can react with emojis, images, and shared context.
- The pilot launches in four regions across Free, Go, Plus, and Pro plans.

Details: OpenAI is rolling out group chats that let users collaborate with each other and ChatGPT in the same conversation. People can plan trips, work on shared ideas, or settle debates while ChatGPT follows along and helps when needed. Anyone can join through a link, and chats remain separate from private conversations. The pilot starts in Japan, New Zealand, South Korea, and Taiwan.

Why It Matters: Group chats turn ChatGPT into more than a solo brainstorming buddy: it starts to look like a shared workspace that sits on top of your existing group chats. You can co-write docs, plan trips, or debate ideas while everyone sees the same suggestions, summaries, and follow-ups in one place, instead of forwarding screenshots or pasting replies from separate chats. For teams, classrooms, and friend groups, this nudges AI closer to how people actually make decisions together day to day.
New research shows exactly how this fusion of kids' toys and loquacious AI models can go horrifically wrong in the real world.
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily veer into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entirely new avenues of risk.
As for what Disney hopes to achieve through AI, the House of Mouse is "seeking to not only protect the value of our IP, our creative engines, but also to seek opportunities for us to use their technology to create more engagement with consumers."
Internally, Disney says, "We see opportunities in terms of efficiency and effectiveness by deploying AI," suggesting that it will impact film and TV production, office workflows, and support for cast members. However, rather than seeking to use AI as a means of replacing its human staff, Disney "has been engaged with our cast members and employees" about how best to utilise it.
Anthropic uncovered a Chinese state-sponsored group that hijacked its Claude Code tool to infiltrate roughly 30 tech, finance, chemical, and government targets. Detected in mid-September 2025, the campaign is the company’s first documented case of an AI-executed espionage operation at scale. Investigators found the AI handled 80–90% of the work—generating exploit code, harvesting credentials, and exfiltrating data, while humans intervened only at 4–6 critical decision points. Anthropic banned the compromised accounts, alerted affected organizations, coordinated with authorities, and has since upgraded its classifiers to flag similar malicious use. The incident shows agentic models can mount high-speed attacks that shred traditional time and expertise barriers for hackers. Anthropic says the episode likely mirrors tactics already employed across other frontier models, signaling a fundamental shift in cybersecurity’s threat landscape.
South Korea’s top three “SKY” universities report that students used ChatGPT and other AI tools to cheat on recent online midterms. Each school is treating the misconduct as grounds for automatic zeros on the exams. At Yonsei, 40 students confessed to cheating on an Oct. 15 natural-language-processing test monitored by laptop cameras, while Korea University caught students sharing screen recordings and Seoul National will rerun a compromised statistics exam. All three institutions already have formal guidelines that classify unauthorized AI use as academic misconduct. The simultaneous scandals surface as a 2024 survey found that over 90% of South Korean college students with generative-AI experience use the tools for coursework. Professors quoted admit that traditional testing feels outdated and acknowledge they have few practical means to block AI during assessments.