Promoting top frontline performers into supervisors can backfire. Gallup data show why it matters to promote based on supervisory talent and to offer managerial training.
Journalistic Malpractice: No LLM Ever ‘Admits’ To Anything, And Reporting Otherwise Is A Lie
Over the past week, Reuters, Newsweek, the Daily Beast, CNBC, and a parade of other outlets published headlines claiming that Grok—Elon Musk’s LLM chatbot (the one that once referred to itsel…
Why I’m Really Worried by the Negative Trend of Hallucination Cases (and It’s Not Because of the Ethics Failures)
A recent eDiscovery Today article reported that cases involving hallucinated citations and excerpts are not just continuing to occur, which would be bad enough, but are actually increasing at a rapid rate. Readers of this site will be familiar with the problem and its ethical implications, so I’ll jump right into the three reasons I’m really worried by this trend.
It’s nearly the end of 2025, and half of the US and the UK now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now have various
The Job Search and Career Newsletters Worth Your Inbox in 2026
Most job seekers aren’t struggling because they aren’t trying hard enough. They’re struggling because the advice they’re following is fragmented, outdated, or buried under noise. The result is predictable. Well-qualified professionals spend months applying, networking, and refining their LinkedIn profiles without gaining traction, not because they’re doing nothing, but because they’re doing […]
The Ruling Is In: GenAI Prompts Are Core Discoverable ESI
Think your AI prompts disappear when you hit delete? Not when litigation lands. We unpack the OpenAI copyright MDL to show how courts are turning ChatGPT conversation logs into core electronic evidence—preserved, sampled, de-identified, and produced under a protective order. The result is a clear, repeatable playbook for handling AI data at scale without letting privacy swallow relevance.

We walk through the emergency preservation orders that halted deletion across consumer, enterprise, and API logs, then explain why the parties settled on a 20 million chat sample and how de-identification pipelines strip direct identifiers while keeping prompts and outputs analyzable. Along the way, we tackle the big question of relevance: why usage patterns and non-infringing outputs matter for fair use factor four, market harm, and damages, and why a search-term-only approach can’t answer merits questions in a generative AI case.

You’ll hear the strategic pivots that shaped the fight—OpenAI’s attempt to narrow production after de-identifying the full sample, the court’s treatment of privacy as part of burden rather than a veto, and the denial of a stay that kept production on track. Then we distill three takeaways for legal teams: prompts are now squarely within the duty to preserve, the sample you propose will likely bind you later, and privacy is a dial you engineer through sampling, de-identification, and AEO protections.

Whether your organization uses ChatGPT, Copilot, Gemini, Claude, or in-house LLMs, this episode maps the practical steps: identify where logs live, understand tenant controls and exports, plan system-based discovery alongside key custodian evidence, and build credibility with numbers and workflows you can defend. Subscribe, share with your litigation and privacy teams, and leave a review telling us: how are you preparing your AI preserves and productions for 2026?
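For readers who want a concrete picture of what "stripping direct identifiers while keeping prompts and outputs analyzable" can look like, here is a minimal, hypothetical sketch in Python. The record fields, regex patterns, and redaction tokens are assumptions for illustration only; this is not the pipeline the MDL parties actually used.

import re

# Hypothetical de-identification pass over a single chat-log record.
# Direct identifiers are masked; prompt and output text otherwise stay intact.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace direct identifiers with tokens, leaving the rest analyzable."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text

def deidentify(record: dict) -> dict:
    """Drop account-level identifiers and redact identifiers in the conversation."""
    return {
        "conversation_id": record["conversation_id"],  # pseudonymous key retained
        "timestamp": record["timestamp"],
        "prompt": redact(record["prompt"]),
        "output": redact(record["output"]),
        # user_id, email, IP address, etc. are intentionally not carried over
    }

if __name__ == "__main__":
    sample = {
        "conversation_id": "c-0001",
        "timestamp": "2025-11-02T14:03:00Z",
        "user_id": "u-42",
        "prompt": "Email me the summary at jane.doe@example.com",
        "output": "Sure, I will send it to jane.doe@example.com.",
    }
    print(deidentify(sample))

In practice, a production pipeline would run a pass like this over every record in the agreed sample and log what was masked, so the workflow can be defended later; the sketch above only shows the basic shape of that step.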
Against the Federal Moratorium on State-Level Regulation of AI
Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical ...
This is a pretty significant change for many of us who had been concerned about vetting data protection agreements with Anthropic before allowing user access to the Claude option in Copilot Chat, and who've also watched new agents for Word, Excel, and PowerPoint roll out, but not to our tenants with Anthropic model access disabled.
Learn why GenAI prompts, responses, and logs may be discoverable in litigation, what courts are signaling, and how legal teams should manage this emerging data type.
M365 News for November 2025 - Mike McBride on M365
This post will be updated throughout the month as new items are added to the tag. Be sure to subscribe to my M365 Newsletter for more M365 expertise and news.
Is Your Teams Meeting Being Recorded? I’d Assume It Is
A few months ago, I wrote about people using AI Notetakers in Teams meetings. I've spoken several times about the privacy implications of recording Teams meetings, using Copilot, and related practices. One thing I've been encouraging people to understand is that, even if you host the meeting and turn off all AI, recording, and transcription