AI

1823 bookmarks
ChatGPT: OpenAI Loses Landmark Copyright Case Over Song Lyrics - WinBuzzer
ChatGPT: OpenAI Loses Landmark Copyright Case Over Song Lyrics - WinBuzzer
A Munich court has ruled that OpenAI's ChatGPT violates copyright by reproducing song lyrics, a major victory for rights holders like GEMA and a key precedent for AI regulation.
·winbuzzer.com·
ChatGPT: OpenAI Loses Landmark Copyright Case Over Song Lyrics - WinBuzzer
Sign the petition: Protect Kids from Harmful Meta AI
Sign the petition: Protect Kids from Harmful Meta AI
I just took action to protect kids from dangerous AI tools online! Will you take a minute to sign our petition urging Meta to prevent young people from accessing its harmful AI companion chatbot?
·p2a.co·
Sign the petition: Protect Kids from Harmful Meta AI
AI country artist hits #1 on Billboard digital songs chart
AI country artist hits #1 on Billboard digital songs chart
"Breaking Rust, an AI-powered country act, debuted at No. 9 on the Emerging Artists chart (dated Nov. 1)," the music publication said. "The project, credited to songwriter Aubierre Rivaldo Taylor, has generated 1.6 million official U.S. streams."
·theregister.com·
AI country artist hits #1 on Billboard digital songs chart
Common Instructions and Information for Applicants to Department of Education Discretionary Grant Programs
Common Instructions and Information for Applicants to Department of Education Discretionary Grant Programs

Funding for AI education covers 2 of the 7 total priorities, divided into two $25M funds, with grants ranging from $1M to $4M for a 4-year project term.

  1. The "Advancing AI to Improve Educational Outcomes of Postsecondary Students" priority will support projects that use AI to enhance teaching, learning, and student success in education.
  2. The "Ensuring Future Educators and Students Have Foundational Exposure to AI and Computer Science" priority will support projects that broaden access to AI and expand computer science course offerings. At first, I thought all this money was only for postsecondary goals, but priority 2.f on page 15 says, "Partner with SEAs and/or LEAs to provide resources to K-12 students in foundational computer science and AI literacy, including through professional development for educators."

Eligible applicants: institutions of higher education, consortia of such institutions, and other public and private nonprofit institutions and agencies. The Department expects to make awards by December 31, 2025.
·federalregister.gov·
Common Instructions and Information for Applicants to Department of Education Discretionary Grant Programs
Post | LinkedIn
Post | LinkedIn
🥇This is Gold! Just dropped by Carnegie Mellon University! It’s one of the most honest looks yet at how “autonomous” agents actually perform in the real world. 👇

The study analyzed AI agents across 50+ occupations, from software engineering to marketing, HR, and design, and compared how they completed human workflows end to end. What they found is both exciting and humbling:

• Agents “code everything.” Even in creative or administrative tasks, AI agents defaulted to treating work as a coding problem. Instead of drafting slides or writing strategies, they generated and ran code to produce results, automating processes that humans usually approach through reasoning and iteration.
• They’re faster and cheaper, but not better. Agents completed tasks 4-8× faster and at a fraction of the cost, yet their outputs showed lower quality, weak tool use, and frequent factual errors or hallucinations.
• Human-AI teaming consistently outperformed solo AI. 🔥 When humans guided or reviewed the agent’s process, acting more like a “manager” or “co-pilot”, the results improved dramatically.

🧠 My take: The race toward “fully autonomous AI” is missing the real opportunity: co-intelligence. Right now, the biggest ROI in enterprises isn’t from replacing humans. It’s from augmenting them.

✅ Use AI to translate intent into action, not to replace decision-making.
✅ Build copilots before colleagues: co-workers who understand your workflow, not just your prompt.
✅ Redesign processes for hybrid intelligence, where AI handles execution and humans handle ambiguity.

The future of work isn’t humans or AI. (For the next 5 years, IMO.) It’s humans with AI, working in a shared cognitive space where each amplifies the other’s strengths. Because autonomy without alignment isn’t intelligence, it’s chaos. Autonomous AI isn’t replacing human work, it’s redistributing it. Humans shifted from doing to directing, while agents handled repetitive, programmable layers.
Maybe we are just too fast to shift from the “uncool” Copilot to something more exciting called “Fully Autonomous AI”. What do you think?
·linkedin.com·
Post | LinkedIn
International Criminal Court to ditch Microsoft Office for European open source alternative | Euractiv
International Criminal Court to ditch Microsoft Office for European open source alternative | Euractiv

The International Criminal Court (ICC) just ghosted Microsoft. After years of U.S. pressure, the world’s top war crimes court is cutting its digital ties with America’s software empire. Its new partner? A German state-funded open-source suite called OpenDesk by Zentrum Digitale Souveränität (ZenDiS).

It’s a symbolic divorce, and a strategic one. The International Criminal Court’s shift away from Microsoft Office may sound like an IT procurement story, but it’s really about trust, control, and sovereignty.

For the ICC, this isn’t theory. Under the previous Trump administration, in 2020, Washington imposed sanctions on the court’s chief prosecutor and reportedly triggered a temporary shutdown of his Microsoft account. When your prosecutor’s inbox can be weaponised, trust collapses. And when trust collapses, systems follow.

Europe has seen this coming. In Schleswig-Holstein, Germany, the public sector has already replaced Microsoft entirely with open-source systems. Denmark is building a national cloud anchored in European data centres. There is a broader ripple across Europe: France, Italy, Spain and other regions are piloting or considering similar steps. We may be facing a "who's next" trend. The EU’s Sovereign Cloud initiative is quietly expanding into justice, health, and education.

This pattern is unmistakable: trust has become the new infrastructure of AI and digital governance. The question shaping every boardroom and every ministry is the same: who ultimately controls the data, the servers, and the decisions behind them?

For Europe’s schools, courts, and governments, dependence on U.S. providers may look less like innovation and more like exposure. European alternatives may still lack the seamless polish, but they bring something far more valuable to the market: autonomy, compliance, and credibility.

The ICC’s decision is not about software. It’s about sovereignty, and the politics of trust. And, the message is clear: Europe isn’t rejecting technology. It’s reclaiming ownership of it.

·euractiv.com·
International Criminal Court to ditch Microsoft Office for European open source alternative | Euractiv
What we lose when we surrender care to algorithms | Eric Reinhart
What we lose when we surrender care to algorithms | Eric Reinhart

AI sweeps into US clinical practice at record speed, with two-thirds of physicians and 86% of health systems using it in 2024. That uptake represents a 78% jump in physician adoption over the previous year, ending decades of technological resistance. Clinics are rolling out AI scribes that transcribe visits in real time, highlight symptoms, suggest diagnoses and generate billing codes. The article also cites AI systems matching specialist accuracy in imaging, flagging sepsis faster than clinical teams, and an OpenEvidence model scoring 100% on the US medical licensing exam. Experts quoted say that in a healthcare sector built on efficiency and profit, AI turns patient encounters into commodified data streams and sidelines human connection. They contend the technology entrenches systemic biases, accelerates physician deskilling and hands more control over care decisions to corporations.

·theguardian.com·
What we lose when we surrender care to algorithms | Eric Reinhart
Perplexity to pay Snap $400M to power search in Snapchat | TechCrunch
Perplexity to pay Snap $400M to power search in Snapchat | TechCrunch

Snap agrees to integrate Perplexity’s AI search engine into My AI, and Perplexity will pay $400 million in cash and equity. The feature is slated to appear in the app early next year. The arrangement grants Perplexity exposure to Snapchat’s 940 million users and lets Snap begin recognizing revenue from the deal in 2026. Snap announced the partnership while reporting Q3 2025 revenue of $1.51 billion, up 10%, and a narrowed loss of $104 million. The $400 million price tag highlights the premium AI firms will pay for built-in scale. For Snap, the agreement converts its My AI feature from a user perk into a material revenue source.

·techcrunch.com·
Perplexity to pay Snap $400M to power search in Snapchat | TechCrunch
Modulate DeepFake Detective
Modulate DeepFake Detective

Deepfakes aren’t science fiction anymore. Deepfake fraud has surged past 100,000 incidents a year, costing companies billions... and even trained professionals can’t detect them by ear alone. The same voice intelligence behind this demo powers enterprise-scale fraud and threat detection — purpose-built for the complexity of real conversations. Prevention starts with understanding how sophisticated deepfakes have become. Learn how our modern AI platform can stop them in real time.

·deepfake-detective.modulate.ai·
Modulate DeepFake Detective
'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases - Slashdot
'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases - Slashdot
"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher ...
·yro.slashdot.org·
'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases - Slashdot
ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says
ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says

Researchers found that the language the chatbot used when offering medical advice came across as more convincing and agreeable than that of real people. So even when the information it provided was inaccurate, that was hard to detect, because the chatbot came across as confident and trustworthy.

In turn, doctors are finding that patients show up to appointments with their minds made up, often citing advice given by AI tools.

·ctvnews.ca·
ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says
If you ask me about my favorite AI tools for teachers, I’d start with chatbots like ChatGPT, Claude, and, at times, Gemini (sorry Copilot).
If you ask me about my favorite AI tools for teachers, I’d start with chatbots like ChatGPT, Claude, and, at times, Gemini (sorry Copilot).
If you ask me about my favorite AI tools for teachers, I’d start with chatbots like ChatGPT, Claude, and, at times, Gemini (sorry Copilot). With strong prompt-crafting skills, you can achieve almost anything with these. But if you’d rather skip the trial and error, or if your prompting skills still need work (they really are worth developing), the tools featured here are some of the best I’ve personally used and tested. I say “personally” because this list is subjective. You might have your own favorites and that’s fine. This selection comes from my own experience as an AI researcher and longtime EdTech reviewer. #AIforTeachers #EdTech #TeachingWithAI #AITools #EducationTechnology #ChatGPT #ClaudeAI #TeacherResources #AIinEducation #EdTechTools
·linkedin.com·
If you ask me about my favorite AI tools for teachers, I’d start with chatbots like ChatGPT, Claude, and, at times, Gemini (sorry Copilot).
The Boss Has a Message: Use AI or You’re Fired - WSJ
The Boss Has a Message: Use AI or You’re Fired - WSJ

At companies big and small, employees have feared being replaced by AI. The new threat: being replaced by someone who knows AI.

By Lindsay Ellis

Julie Sweet, the chief executive of consulting giant Accenture, recently delivered some tough news: Accenture is “exiting” employees who aren’t getting the hang of using AI at work.

The firm has trained about 70% of its roughly 779,000 employees in generative artificial-intelligence fundamentals, she told investors. But employees for whom “reskilling, based on our experience, is not a viable path” will be shown the door, Sweet said.

Rank-and-file employees across corporate America have grown worried over the past few years about being replaced by AI. Something else is happening now: AI is costing workers their jobs if their bosses believe they aren’t embracing the technology fast enough.

From professional-services firms to technology companies, employers are pushing their staffs to learn generative AI and integrate programs like ChatGPT, Gemini or customized company-specific tools into their work. They’re sometimes using sticks rather than carrots. Anyone deemed untrainable or seen as dragging their feet risks being weeded out of hiring processes, marked down in performance reviews or laid off.

Companies are putting their workers on notice about their AI skills amid a wave of white-collar job cuts. Amazon.com announced layoffs last week that affected roughly 14,000 jobs, while Target recently shed 1,800 corporate roles. International Business Machines has also disclosed thousands of cuts. Executives at Amazon and IBM have tied workforce cuts to the technology in statements this year.

Some companies are training people in how to use the tools—but leaving it up to them to figure out what to use them for. There are countless possibilities for how to deploy AI. Some businesses have required training classes or set up help desks to coach employees on how to incorporate AI into their work. Others are putting the onus on staff to think creatively about how to make money or save time with the tech.

That can prompt exciting innovations—or it may come at the expense of getting work done. Or both.

At enterprise-software company IgniteTech, leaders required staff last year to devote 20% of their workweek to experimenting with AI. On one such “AI Monday,” staff brainstormed ways to speed up processes, like automating responses to customer-service tickets. Employees also had to share on Slack and X what they were learning about AI.

CEO Eric Vaughan said that employees self-assessed their AI usage and, afterward, the company used ChatGPT to rank the results. After a human review, IgniteTech cut the lowest-scoring performers.

“By their own admission, they’re in the basement,” he said. “So now they have to leave.”

It wasn’t easy: Vaughan recalls speaking with his wife over that time about the changes, feeling “terrible.” But he said he felt AI was an existential threat, and that if IgniteTech didn’t transform, the company would die. One tough exit was the chief product officer, who had been with the company for years. He and others were model, productive employees historically but were resisting the AI mandate, said Vaughan, who also leads GFI Software and Khoros.

Greg Coyle, that executive, said he had bought into AI’s potential to improve IgniteTech’s products and add new capabilities. But he took issue with the nature of the widespread cuts, particularly because the technology is in such an early stage.

“Doing this rapid culling of your workforce, it’s very risky,” he said. “If your AI plan doesn’t work out the way you expected it to, it’s a huge risk for the business.”

After a round of cuts, Coyle said he pushed back against an AI mandate in late 2023 in an executive meeting. He said he felt the company wasn’t working strategically as it pushed out staff. A few months later, he said, he was fired.

AI, Coyle said, is “coming whether we like it or not. You either get on board or you get left behind.” But, he added, “I don’t believe that you take this brute force, across-the-board approach to AI in the business.”

Vaughan said the company has since hired AI specialists to replace the laid-off staff. Accenture has said that it expects to increase headcount in the 2026 fiscal year.

At workplaces large and small, plenty of workers are hesitant to adopt AI, fearful that widespread adoption will innovate them out of a job. They also doubt the technology can do the job as well as they can.

A recent Gallup survey found that more than 40% of U.S. workers who don’t use AI say the main reason is they don’t believe it can help their work. A smaller share, 11%, said their primary driver was that they did not want to change how they worked. While AI adoption has grown in the past year, working Americans are about three times as likely to say they aren’t prepared at all for AI as opposed to “very prepared,” Gallup found.

Many employees, even when exposed to AI tools that companies spend a lot on, aren’t biting. When researchers at the Massachusetts Institute of Technology reviewed more than 300 AI initiatives, they found only 5% were achieving quantifiable value. Employees flock to tools like ChatGPT and Microsoft’s Copilot for their ease of use, but don’t often adopt other software.

A big impediment, the researchers found, is that many of those tools aren’t yet programmed to learn from users’ past interactions. That makes approaching a human colleague a better option for complex work. The best return on investment, the researchers found, has often been on back-office functions.

Prioritizing AI adopters

Companies are finding other ways to push staff to integrate AI into their work.

At McKinsey, analytic problem solving is at the heart of what consultants do. When that skill is measured in future performance reviews, consultants will be evaluated on how they make decisions with AI. Now, in assigning staff to some client projects, McKinsey gives priority to employees who are trained in AI, said Kate Smaje, a senior partner and global leader of technology and AI.

People in KPMG’s human-resources division are assessed on how well they collaborate with AI in their wider evaluations, the firm’s head of people said.

PwC is requiring AI training for its newest hires. It kicked off a nine-piece pilot curriculum for new-graduate associate hires in October, including lessons on “prompting with purpose,” designing workflows that include AI and instruction on how to use the tools responsibly.

And at a fall PwC all-partner meeting with thousands of attendees, working with the technology was part of the agenda. The multimillion-dollar investment in AI training “will absolutely pay off,” said Margaret Burke, the firm’s head of recruiting and learning and development.

At Concentrix, a customer-service outsourcing company with more than 400,000 staff, bosses recently realized low-performing developers weren’t using AI.

“You find out those people are refusing to adjust,” said Ryan Peterson, Concentrix’s chief product officer.

Concentrix hired Peterson from Amazon in 2024 with a mandate to find ways to incorporate AI across the company. Its attorneys now use AI to redline new versions of contracts. The technology flags clauses that the company would never agree to in negotiations—like accepting unlimited liability, Peterson said. These efficiencies mean that Concentrix was able to redeploy 10 attorneys to higher-value negotiation work and litigation management.

Purchasing teams use the technology to compare requests for proposals, and marketing teams now use it to format and template emails, he said.

Concentrix’s CEO said in a June earnings call that he doesn’t foresee a “massive decrease” in employment, though he noted that declining head count is a possibility.

‘AI will, not just skill’

Multiverse, an education-tech company in London, states that its mission is to advance AI adoption. Each quarter, it awards 10,000 pounds, or about $13,000, to the employee who has come up with the best uses for AI. Finalists this quarter include the creator of a paperwork-automation system that cut a 30-minute task to five minutes and someone who made a sales aide that creates a customized briefing based on publicly available information.

Job applicants at Multiverse are asked in interviews how they use AI in their lives, and in one assignment, prospective hires write prompts to complete certain tasks, said Libby Dangoor, who oversees the company’s human resources and AI among other areas. If applicants are skeptical of AI, it would be picked up in the application process, she said. “We have to hire for AI will, not just skill,” she said.

LinkedIn job postings requiring AI literacy skills have expanded by 70% in the 12 months ended in July, according to the site.

When Annie Hamburgen began a job search after an extended trip to South America this year, prospective employers kept asking her about AI.

Annie Hamburgen, 28, of Incline Village, Nev., left her marketing job in March to travel in South America. When she came back and began looking for new work this summer, prospective employers kept asking her about AI. “I’ve been trying to demonstrate my openness to learning while making it clear that I’m not going to blindly type things in and accept whatever result comes out,” she said. Hamburgen recently got hired for a role leading integrated marketing and starts on Monday. In conversations with her future boss, it’s been clear that she should be using AI to synthesize information.

·archive.is·
The Boss Has a Message: Use AI or You’re Fired - WSJ
The Algorithmic Turn: The Emerging Evidence On AI Tutoring
The Algorithmic Turn: The Emerging Evidence On AI Tutoring

If we take learning to be a durable change in long-term memory, if we take instruction as the key lever of that change, and if AI can teach better than humans, not as some distant possibility but as an emerging reality, then we must reckon with what that reveals about teaching itself.

The lesson here is not that AI has discovered a new kind of learning, but that it has finally begun to exploit the one we already understand.

But let’s be clear. Again, the history of EdTech is a story of failure, very expensive failure. This is not merely a chronicle of wasted resources, though the financial cost has been considerable. More troubling is the opportunity cost: the reforms not pursued, the teacher training not funded, the evidence-based interventions not scaled because capital and attention were directed toward shiny technological solutions. As Larry Cuban documented in his work on educational technology, we have repeatedly mistaken the novelty of the medium for the substance of the pedagogy.

The reasons for these failures are instructive. Many EdTech interventions have been solutions in search of problems, designed by technologists with limited understanding of how learning actually occurs. They have prioritised engagement over mastery, confusing students’ enjoyment of a platform with their acquisition of knowledge. They have ignored decades of cognitive science research in favour of intuitive but ineffective approaches. They have failed to account for implementation challenges, teacher training requirements, and the messy realities of classroom practice.

·carlhendrick.substack.com·
The Algorithmic Turn: The Emerging Evidence On AI Tutoring
AI Is the Bubble to Burst Them All
AI Is the Bubble to Burst Them All
AI has all the hallmarks of a bubble, Goldfarb says. “There’s no question,” he says. “It hits all the right notes.” Uncertainty? Check. Pure plays? Check. Novice investors? Check. A great narrative? Check. On that 0-to-8 scale, it’s an 8. Buyer beware.
·wired.com·
AI Is the Bubble to Burst Them All
AI and the future of education. Disruptions, dilemmas and directions
AI and the future of education. Disruptions, dilemmas and directions
This anthology explores the philosophical, ethical and pedagogical dilemmas posed by the disruptive influence of AI in education. Bringing together insights from global thinkers, leaders and changemakers, the collection challenges assumptions, surfaces frictions, provokes contestation, and sparks audacious new visions for equitable human-machine co-creation.
·unesco.org·
AI and the future of education. Disruptions, dilemmas and directions
(PDF) University or AI? ChatGPT, Skills and Employability
(PDF) University or AI? ChatGPT, Skills and Employability
FOR UNIVERSITY APPLICANTS, THEIR PARENTS AND GUARDIANS: This report is written for 6th Form students, and their parents and guardians who want to...
·researchgate.net·
(PDF) University or AI? ChatGPT, Skills and Employability
AI SMART Goals. Pedagogy Shift. Time-Saving Bots.
AI SMART Goals. Pedagogy Shift. Time-Saving Bots.
Faculty aren’t just reacting to AI anymore; they’re shaping it. In this issue: use AI to write meaningful SMART goals, explore how teaching is evolving, and meet the BoodleBox bots that make your work lighter and your impact stronger.
·aiprofessor.substack.com·
AI SMART Goals. Pedagogy Shift. Time-Saving Bots.
Processes are More Important than Prompts
Processes are More Important than Prompts
This post outlines five practical steps for using GenAI platforms. Applying expertise, selecting the right model, adding context, using the internet, and iterating and refining are useful skills for working with these technologies.
·leonfurze.com·
Processes are More Important than Prompts
You Can’t Save the American Dream by Freezing It in Time
You Can’t Save the American Dream by Freezing It in Time
“They gave your job to AI. They picked profit over people. That’s not going to happen when I’m in office. We’re going to tax companies that automate away your livelihood. We’re going to halt excessive use of AI. We’re going to make sure the American Dream isn’t outsourced to AI labs. Anyone who isn’...
·thefulcrum.us·
You Can’t Save the American Dream by Freezing It in Time