AI
For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not to snoop on private chats.
AI education accounts for 2 of the 7 total priorities, divided into two $25M funds, with grants ranging from $1M to $4M over a 4-year project term.
- The "Advancing AI to Improve Educational Outcomes of Postsecondary Students" priority will support projects that use AI to enhance teaching, learning, and student success in education.
- The "Ensuring Future Educators and Students Have Foundational Exposure to AI and Computer Science" priority will support projects that broaden access to AI and expand computer science course offerings. At first, I thought all this money was for only for postsecondary goals, but priority 2.f on page 15 says, "Partner with SEAs and/or LEAs to provide resources to K-12 students in foundational computer science and AI literacy, including through professional development for educators." Eligible applicants: Institutions of higher education, consortia of such institutions, and other public and private nonprofit institutions and agencies. The Department expects to make awards by December 31, 2025
The International Criminal Court (ICC) just ghosted Microsoft. After years of U.S. pressure, the world’s top war crimes court is cutting its digital ties with America’s software empire. Its new partner? A German state-funded open-source suite called OpenDesk by Zentrum Digitale Souveränität (ZenDiS).
It’s a symbolic divorce, and a strategic one. The court’s shift away from Microsoft Office may sound like an IT procurement story, but it’s really about trust, control, and sovereignty.
For the ICC, this isn’t theory. Under the Trump administration in 2020, Washington imposed sanctions on the court’s chief prosecutor and reportedly triggered a temporary shutdown of his Microsoft account. When your prosecutor’s inbox can be weaponised, trust collapses. And when trust collapses, systems follow.
Europe has seen this coming. In Schleswig-Holstein, Germany, the public sector has already replaced Microsoft entirely with open-source systems. Denmark is building a national cloud anchored in European data centres. The ripple is spreading across Europe: France, Italy, Spain, and others are piloting or considering similar steps, and we may be watching a “who’s next” trend take shape. The EU’s Sovereign Cloud initiative is quietly expanding into justice, health, and education.
This pattern is unmistakable: trust has become the new infrastructure of AI and digital governance. The question shaping every boardroom and every ministry is the same: who ultimately controls the data, the servers, and the decisions behind them?
For Europe’s schools, courts, and governments, dependence on U.S. providers may look less like innovation and more like exposure. European alternatives may still lack the seamless polish, but they bring something far more valuable: autonomy, compliance, and credibility.
The ICC’s decision is not about software. It’s about sovereignty, and the politics of trust. And the message is clear: Europe isn’t rejecting technology. It’s reclaiming ownership of it.
AI sweeps into US clinical practice at record speed, with two-thirds of physicians and 86% of health systems using it in 2024. That uptake represents a 78% jump in physician adoption over the previous year, ending decades of technological resistance. Clinics are rolling out AI scribes that transcribe visits in real time, highlight symptoms, suggest diagnoses and generate billing codes. The article also cites AI systems matching specialist accuracy in imaging, flagging sepsis faster than clinical teams, and an OpenEvidence model scoring 100% on the US medical licensing exam. Experts quoted say that in a healthcare sector built on efficiency and profit, AI turns patient encounters into commodified data streams and sidelines human connection. They contend the technology entrenches systemic biases, accelerates physician deskilling and hands more control over care decisions to corporations.
Snap agrees to integrate Perplexity’s AI search engine into My AI, and Perplexity will pay $400 million in cash and equity. The feature is slated to appear in the app early next year. The arrangement grants Perplexity exposure to Snapchat’s 940 million users and lets Snap begin recognizing revenue from the deal in 2026. Snap announced the partnership while reporting Q3 2025 revenue of $1.51 billion, up 10%, and a narrowed loss of $104 million. The $400 million price tag highlights the premium AI firms will pay for built-in scale. For Snap, the agreement converts its My AI feature from a user perk into a material revenue source.
Deepfakes aren’t science fiction anymore. Deepfake fraud has surged past 100,000 incidents a year, costing companies billions... and even trained professionals can’t detect them by ear alone. The same voice intelligence behind this demo powers enterprise-scale fraud and threat detection — purpose-built for the complexity of real conversations. Prevention starts with understanding how sophisticated deepfakes have become. Learn how our modern AI platform can stop them in real time.
Researchers found that the language the chatbot used when offering medical advice came across as more convincing and agreeable than that of real people. So even when the information it provided was inaccurate, the errors were hard to spot because the chatbot came across as confident and trustworthy.
In turn, doctors are finding that patients show up to appointments with their minds already made up, often citing advice from AI tools.
The Boss Has a Message: Use AI or You’re Fired
At companies big and small, employees have feared being replaced by AI. The new threat: being replaced by someone who knows AI.
By Lindsay Ellis
Julie Sweet, the chief executive of consulting giant Accenture, recently delivered some tough news: Accenture is “exiting” employees who aren’t getting the hang of using AI at work.
The firm has trained about 70% of its roughly 779,000 employees in generative artificial-intelligence fundamentals, she told investors. But employees for whom “reskilling, based on our experience, is not a viable path” will be shown the door, Sweet said.
Rank-and-file employees across corporate America have grown worried over the past few years about being replaced by AI. Something else is happening now: AI is costing workers their jobs if their bosses believe they aren’t embracing the technology fast enough.
From professional-services firms to technology companies, employers are pushing their staffs to learn generative AI and integrate programs like ChatGPT, Gemini or customized company-specific tools into their work. They’re sometimes using sticks rather than carrots. Anyone deemed untrainable or seen as dragging their feet risks being weeded out of hiring processes, marked down in performance reviews or laid off.
Companies are putting their workers on notice about their AI skills amid a wave of white-collar job cuts. Amazon.com announced layoffs last week that affected roughly 14,000 jobs, while Target recently shed 1,800 corporate roles. International Business Machines has also disclosed thousands of cuts. Executives at Amazon and IBM have tied workforce cuts to the technology in statements this year.
Some companies are training people in how to use the tools—but leaving it up to them to figure out what to use them for. There are countless possibilities for how to deploy AI. Some businesses have required training classes or set up help desks to coach employees on how to incorporate AI into their work. Others are putting the onus on staff to think creatively about how to make money or save time with the tech.
That can prompt exciting innovations—or it may come at the expense of getting work done. Or both.
At enterprise-software company IgniteTech, leaders required staff last year to devote 20% of their workweek to experimenting with AI. On one such “AI Monday,” staff brainstormed ways to speed up processes like automating responding to customer-service tickets. Employees also had to share on Slack and X what they were learning about AI.
CEO Eric Vaughan said that employees self-assessed their AI usage and, afterward, the company used ChatGPT to rank the results. After a human review, IgniteTech cut the lowest-scoring performers.
“By their own admission, they’re in the basement,” he said. “So now they have to leave.”
It wasn’t easy: Vaughan recalls speaking with his wife over that time about the changes, feeling “terrible.” But he said he felt AI was an existential threat, and that if IgniteTech didn’t transform, the company would die. One tough exit was the chief product officer, who had been with the company for years. He and others were model, productive employees historically but were resisting the AI mandate, said Vaughan, who also leads GFI Software and Khoros.
Greg Coyle, that executive, said he had bought into AI’s potential to improve IgniteTech’s products and add new capabilities. But he took issue with the nature of the widespread cuts, particularly because the technology is in such an early stage.
“Doing this rapid culling of your workforce, it’s very risky,” he said. “If your AI plan doesn’t work out the way you expected it to, it’s a huge risk for the business.”
After a round of cuts, Coyle said he pushed back against an AI mandate in late 2023 in an executive meeting. He said he felt the company wasn’t working strategically as it pushed out staff. A few months later, he said, he was fired.
AI, Coyle said, is “coming whether we like it or not. You either get on board or you get left behind.” But, he added, “I don’t believe that you take this brute force, across-the-board approach to AI in the business.”
Vaughan said the company has since hired AI specialists to replace the laid-off staff. Accenture has said that it expects to increase headcount this fiscal year.
At workforces large and small, plenty of workers are hesitant to adopt AI, fearful that widespread adoption will innovate them out of a job. They also doubt the technology can do the job as well as they can.
A recent Gallup survey found that more than 40% of U.S. workers who don’t use AI say the main reason is they don’t believe it can help their work. A smaller share, 11%, said their primary driver was that they did not want to change how they worked. While AI adoption has grown in the past year, working Americans are about three times as likely to say they aren’t prepared at all for AI as opposed to “very prepared,” Gallup found.
Many employees, even when exposed to AI tools that companies spend a lot on, aren’t biting. When researchers at the Massachusetts Institute of Technology reviewed more than 300 AI initiatives, they found only 5% were achieving quantifiable value. Employees flock to tools like ChatGPT and Microsoft’s Copilot for their ease of use, but don’t often adopt other software.
A big impediment, the researchers found, is that many of those tools aren’t yet programmed to learn from users’ past interactions. That makes approaching a human colleague a better option for complex work. The best return on investment, the researchers found, has often been on back-office functions.
Prioritizing AI adopters
Companies are finding other ways to push staff to integrate AI into their work.
At McKinsey, analytic problem solving is at the heart of what consultants do. When that skill is measured in future performance reviews, consultants will be evaluated on how they make decisions with AI. Now, in assigning staff to some client projects, McKinsey gives priority to employees who are trained in AI, said Kate Smaje, a senior partner and global leader of technology and AI.
People in KPMG’s human-resources division are assessed on how well they collaborate with AI in their wider evaluations, the firm’s head of people said.
PwC is requiring AI training for its newest hires. It kicked off a nine-part pilot curriculum for new-graduate associate hires in October, including lessons on “prompting with purpose,” designing workflows that include AI, and instruction on how to use the tools responsibly.
And at a fall PwC all-partner meeting with thousands of attendees, working with the technology was part of the agenda. The multimillion-dollar investment in AI training “will absolutely pay off,” said Margaret Burke, the firm’s head of recruiting and learning and development.
At Concentrix, a customer-service outsourcing company with more than 400,000 staff, bosses recently realized low-performing developers weren’t using AI.
“You find out those people are refusing to adjust,” said Ryan Peterson, Concentrix’s chief product officer.
Concentrix hired Peterson from Amazon in 2024 with a mandate to find ways to incorporate AI across the company. Its attorneys now use AI to redline new versions of contracts. The technology flags clauses that the company would never agree to in negotiations—like accepting unlimited liability, Peterson said. These efficiencies mean that Concentrix was able to redeploy 10 attorneys to higher-value negotiation work and litigation management.
Purchasing teams use the technology to compare requests for proposals, and marketing teams now use it to format and template emails, he said.
Concentrix’s CEO said in a June earnings call that he doesn’t foresee a “massive decrease” in employment, though he noted that declining head count is a possibility.
‘AI will, not just skill’
Multiverse, an education-tech company in London, says its mission is to advance AI adoption. Each quarter, it awards 10,000 pounds, or about $13,000, to the employee who has come up with the best uses for AI. Finalists this quarter include the creator of a paperwork-automation system that cut a 30-minute task to five minutes and someone who built a sales aide that creates a customized briefing based on publicly available information.
Job applicants at Multiverse are asked in interviews how they use AI in their lives, and in one assignment, prospective hires write prompts to complete certain tasks, said Libby Dangoor, who oversees the company’s human resources and AI among other areas. If applicants are skeptical of AI, it would be picked up in the application process, she said. “We have to hire for AI will, not just skill,” she said.
LinkedIn job postings requiring AI literacy skills have expanded by 70% in the 12 months ended in July, according to the site.
Annie Hamburgen, 28, of Incline Village, Nev., left her marketing job in March to travel in South America. When she came back and began looking for new work this summer, prospective employers kept asking her about AI. “I’ve been trying to demonstrate my openness to learning while making it clear that I’m not going to blindly type things in and accept whatever result comes out,” she said. Hamburgen recently got hired for a role leading integrated marketing and starts on Monday. In conversations with her future boss, it’s been clear that she should be using AI to synthesize information.
If we take learning to be a durable change in long-term memory, instruction to be the key lever of that change, and AI’s ability to teach better than humans to be not some distant possibility but an emerging reality, then we must reckon with what that reveals about teaching itself.
The lesson here is not that AI has discovered a new kind of learning, but that it has finally begun to exploit the one we already understand.
But let’s be clear. Again, the history of EdTech is a story of failure, very expensive failure. This is not merely a chronicle of wasted resources, though the financial cost has been considerable. More troubling is the opportunity cost: the reforms not pursued, the teacher training not funded, the evidence-based interventions not scaled because capital and attention were directed toward shiny technological solutions. As Larry Cuban documented in his work on educational technology, we have repeatedly mistaken the novelty of the medium for the substance of the pedagogy.
The reasons for these failures are instructive. Many EdTech interventions have been solutions in search of problems, designed by technologists with limited understanding of how learning actually occurs. They have prioritised engagement over mastery, confusing students’ enjoyment of a platform with their acquisition of knowledge. They have ignored decades of cognitive science research in favour of intuitive but ineffective approaches. They have failed to account for implementation challenges, teacher training requirements, and the messy realities of classroom practice.