AI-GenAI
Recommendations for industry leaders:
- Ensure that technical staff understand the project purpose and domain context: Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure.
- Choose enduring problems: AI projects require time and patience to complete. Before they begin any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year.
- Focus on the problem, not the technology: Successful projects are laser-focused on the problem to be solved, not the technology used to solve it.
- Invest in infrastructure: Up-front investments in infrastructure to support data governance and model deployment can reduce the time required to complete AI projects and can increase the volume of high-quality data available to train effective AI models.
- Understand AI's limitations: When considering a potential AI project, leaders need to include technical experts to assess the project's feasibility.
Recommendations for academia leaders:
- Overcome data-collection barriers through partnerships with government: Partnerships between academia and government agencies could give researchers access to data of the provenance needed for academic research.
- Expand doctoral programs in data science for practitioners: Computer science and data science program leaders should learn from disciplines, such as international relations, in which practitioner doctoral programs often exist side by side at universities to provide pathways for researchers to apply their findings to urgent problems.
Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.
If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we're invited to join.
Chegg will lay off 45% of its workforce, cutting 388 jobs as generative AI and declining Google search traffic slash revenue. The company says it is restructuring its academic products while continuing to fund its own AI tools. Dan Rosensweig returns as CEO effective immediately, replacing Nathan Schultz, who shifts to executive advisor. The board ended its strategic review and unanimously chose to keep Chegg independent. Chegg's stock has crashed 99% from its 2021 high, shrinking its market cap from $14.7 billion to roughly $156 million. The dramatic value loss, coupled with two major layoff rounds this year, shows how quickly AI-driven competition has gutted the firm's longtime education model.
California State University has launched a sweeping initiative to position itself as the nation's "largest A.I.-empowered" university. The 22-campus system is paying OpenAI $16.9 million for ChatGPT Edu access and is running an Amazon-backed A.I. camp that trains students on tools like Bedrock. The ChatGPT Edu deal covers more than half a million students and staff, which OpenAI calls its biggest deployment to date. Cal State has also convened an A.I. committee with representatives from a dozen major tech firms to shape the skills employers want from graduates. The move hands unprecedented influence over curriculum to Silicon Valley inside the country's biggest public university. Faculty senates on multiple campuses have passed resolutions condemning the arrangement as an expensive surrender of academic independence and rigor.
"Most of those startups depend on Anthropic's model," Burton said. In Burton's view, Anthropic's Claude family of large language models is the best among the AI frontier model makers in solving the problem of automatic code generation. "Anthropic models are better than anyone at code generation."
As a result, "Code generation tools are going to struggle to keep ahead of Anthropic," he said.
Anthropic has built its own agentic coding tool on top of Claude, called Claude Code. "Anthropic has got the foundation models. Claude Code is probably going to be good enough," said Burton.
Anthropic researchers published a new study finding that Claude can sometimes notice when concepts are artificially planted in its processing and separate internal "thoughts" from what it reads, showing limited introspective capabilities. The details:
- Specific concepts (like "loudness" or "bread") were implanted into Claude's processing, with the AI correctly noticing something unusual 20% of the time.
- When shown written text and given injected "thoughts," Claude was able to accurately repeat what it read while separately identifying the planted concept.
- Models adjusted internally when instructed to "think about" specific words while writing, showing some deliberate control over their processing patterns.
Why it matters: This research shows AI models may be developing some ability to monitor their own processing, which could make them more transparent by helping them accurately explain their reasoning. But it could also be a double-edged sword, with systems potentially learning to better conceal and selectively report their thoughts.
The Canva Design Model understands structure and hierarchy to produce completely editable designs, with integration into ChatGPT, Claude, and Gemini. The Creative Operating System's tools include Video 2.0 for streamlined editing, forms, data connectors, email design, and a 3D generator. Grow consolidates marketing workflows by letting teams browse winning ads, create brand-aware variations, publish directly to Meta, and track performance. Affinity, the pro-design tool Canva acquired in 2024, is also relaunching as an all-in-one free creative app with built-in Canva integrations. Why it matters: AI design tools have come a long way in the past year, and Canva is keeping pace with the acceleration. Now with its own model and an AI feature for every creative need, the disruptive platform is not only empowering its users, but also reducing the need to ever hop to other rivals or more "professional" options.
GrapesJS is an open-source visual editor for building websites, landing pages, email templates, and newsletters, whether you're a designer or a non-technical team. How you can use it:
- Clone a webpage and customize it instantly
- Drag-and-drop your way to a landing page or signup flow
- Export clean, editable code for your devs
- Embed the editor inside your own product or tools stack
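Embedding the editor and exporting code can be sketched in a few lines. This is a minimal browser-side sketch, assuming the grapesjs npm package and a page containing a `#gjs` container element; the `exportForDevs` helper is hypothetical, while `grapesjs.init`, `getHtml`, and `getCss` are part of the public GrapesJS API:

```typescript
import grapesjs from 'grapesjs';

// Mount the visual editor into an existing element on the page.
// fromElement: true tells GrapesJS to treat the markup already
// inside #gjs as the initial canvas content.
const editor = grapesjs.init({
  container: '#gjs',
  fromElement: true,
  storageManager: false, // keep everything in memory for this demo
});

// Hypothetical helper: collect the current design as clean,
// editable HTML/CSS for handoff to developers.
function exportForDevs() {
  return {
    html: editor.getHtml(),
    css: editor.getCss(),
  };
}
```

Because the export is plain HTML and CSS, the same snippet also covers the "embed the editor inside your own product" use case: your app owns the container element and decides where the exported code goes.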
Liminary automatically captures, organizes, and recalls information from articles, PDFs, videos, and meeting transcripts, so you don't lose ideas in the noise. It uses agentic memory recall and connection mapping to surface the right insight at the exact moment you need it, whether you're drafting strategy, writing a brief, or synthesizing research. How you can use it:
- Pull key insights from docs without re-reading everything
- Auto-generate briefs or summaries from saved sources
- Map relationships between ideas across projects
- Keep research organized without manual tagging
Character.AI is removing open-ended companion chats for anyone under 18 after increasing concerns about emotional attachment, dependency, and mental health risk among younger users. Here's what's changing:
- Companion-style chats are being phased out for all minors.
- The platform is rolling out stricter age verification.
- The app will refocus on creative and role-based interactions, not emotional support.
- Usage time limits will show up before the full removal.