Biglaw Firm 'Profoundly Embarrassed' After Submitting Court Filing Riddled With AI Hallucinations - Above the Law
AI & LLMs General
Deloitte to refund government, admits using AI in $440k report
Add this to the list of court cases in which lawyers had to pay fines for AI-generated errors in their documents. Do the people trumpeting AI as the wave of the future know this is happening?
AI Hallucination Cases Database – Damien Charlotin
Database tracking legal cases where generative AI produced hallucinated citations submitted in court filings.
California issues historic fine over lawyer’s ChatGPT fabrications
The court of appeals issued a historic fine after 21 of 23 quotes in the lawyer's opening brief were found to be fake. Courts want more AI regulation. The article details other research and instances of AI mistakes, and explains how mistakes may grow as LLMs grow in size.
Recommendation on the Ethics of Artificial Intelligence - UNESCO
Crossing the uncanny valley of conversational voice
Those old enough to remember short, choppy, grainy video clips before streaming Netflix in 4K have a better sense of the growth arc of technology. Having a chat with these voice AI bots in 2025 makes it easy to believe they could serve as people's friends someday.
AI as Normal Technology | Knight First Amendment Institute
This sophisticated paper frames AI as a "normal technology," setting aside both utopian and dystopian predictions. Given how well that frame aligns with the development of previous technologies, many of its views are worth considering.
We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in our conception
The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory.
UN AI for Good – Law Track Conference: AI Safety and Risk Mitigation
This is the inaugural Law Track Conference of the UN AI for Good global platform. In recognition of the transformative potential of AI in law and justice, this track brings together world-leading companies, academia, policy voices, and legal thinkers to harness the power of AI to promote sustainable, ethical, and responsible law and legal systems.
This seminal event, in collaboration with Stanford Law School, will convene AI thought leaders from industry and academia to explore a range of topics intended to drive understanding of the social good of AI, with particular focus on its applications in the area of law.
Guardrails AI
It's important for educators to be aware of what the business world is doing with regard to the reliability and safety of AI. A brief look at the services this firm offers its clients speaks volumes about the valid, real-time concerns surrounding the use of AI.
How we learned to stop worrying and love ChatGPT: 1,800 researchers on AI tools for science - Paperpile
While their specific features vary, a common thread between these researcher-focused apps is that they emphasize traceability of citations back to the source, even for AI-generated output. This is obviously well-aligned with the academic mindset. We expect detailed traceability will become very widespread in research-focused AI products, and may even become more prominent in general-purpose tools like ChatGPT.
As everyone who’s been through a Ph.D. knows, true insight comes from synthesizing ideas across papers and research fields. Not just from asking questions of a single source. We see more value in the growing long-context capabilities of AI models like Google’s Gemini, which can now reference dozens of full-text papers in a single chat session. We’re also excited by apps that enable accurate AI-powered searches across a wide body of sources, such as NotebookLM, Elicit and Litmaps.
How researchers use AI today
Summarization is one of the more compelling applications of AI in academic literature, so it makes sense that it was the most highly-valued among researchers.
Opinion | For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind - The New York Times
The accusation of genocide being carried out against white farmers is either a horrible moral stain or shameless alarmist disinformation, depending on whom you ask.
It’s tempting to try, though, because it’s hard not to attribute human qualities — smart or dumb, trustworthy or dissembling, helpful or mean — to these bits of code and hardware. Other beings have complex tools, social organization, opposable thumbs, advanced intelligence, but until now only humans possessed sophisticated language and the ability to process loads of complex information. A.I. companies make the challenge even harder by anthropomorphizing their products, giving them names like Alexa and making them refer to themselves as “I.” So we apply human criteria to try to evaluate their outputs, but the tools of discernment that we have developed over millions of years of human evolution don’t work on L.L.M.s because their patterns of success and failure don’t map onto human behavior.
There’s little point in telling people not to use these tools. Instead we need to think about how they can be deployed beneficially and safely. The first step is seeing them for what they are.
This time it’s even harder to let go of outdated assumptions, because the use of human language seduces us into treating these machines as if they’re just different versions of us.
Powerful A.I. Is Coming. We’re Not Ready.
I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I’ve come to believe that what’s happening in A.I. right now is bigger than most people understand.
The most disorienting thing about today’s A.I. industry is that the people closest to the technology — the employees and executives of the leading A.I. labs — tend to be the most worried about how fast it’s improving.
I’ve also found many uses for A.I. tools in my work. I don’t use A.I. to write my columns, but I use it for lots of other things — preparing for interviews, summarizing research papers, building personalized apps to help me with administrative tasks.
Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.
Foreign Information Manipulation & Interference A Large Language Model Perspective
This report focuses on the intersection of Foreign Information Manipulation and Interference (FIMI) and Large Language Models. The aim is to give a non-technical, comprehensive understanding of how weaknesses in the language models can be exploited to create malicious content for use in FIMI.
Revolutionizing Healthcare - Core Concepts: AI Education for Clinicians
Listen to Curtis Langlotz of Stanford talk for a few minutes about the application of AI to clinical trials. The concerns he raises about the transparency of AI tools are just as valuable for educators, historians, and social studies teachers.
The Dirty Energy Powering AI | Council on Foreign Relations
Global electricity use is surging, with unprecedented demand coming from data centers and AI. The transcript of this conversation is mostly about energy use, geopolitics, and climate change, but it places AI squarely in that context.
The hard reality number one is that climate change poses a severe and maybe existential threat to American security, and we haven't been taking it that seriously in the foreign policy establishment.
Climate change poses as much of a risk by the end of the century to America and its existence as nuclear warfare
AI is growing from such a small base. Beyond 2030, if AI really does take off, AI power demand globally could just begin to take over the grid. And that's already started to happen in some of the pioneer markets where data centers are particularly heavy, such as in Ireland and increasingly in the United States.
As of March 2024, there are more than 11,000 data centers worldwide, 45 percent of which are housed in the United States.
today in the United States, data centers, overall as a category, consume about 4 percent of American power
today, if we're at 4 percent in the United States for data centers overall, by the end of the decade, 2030—just five and a half years away—we are looking at anywhere from 10 percent to 12 percent of the American grid
A ChatGPT query can use 100 times or more energy than a simple Google search.
An AI model from the University of Leeds has been trained to measure changes in icebergs 10,000 times faster than humans can.
We need to do that in order to compete with China. China is building AI data centers also at a breakneck pace, and they are not limited by the available power resources. They're actually limited by the availability of high-end chips, which America has some export controls over and is considering even more export controls, for example, on NVIDIA GPUs.
in the part of the United States with the largest concentration of data centers, right here in Northern Virginia, the power utility locally, Dominion Energy, says that it'll take more than seven years to connect the next data center of 100 megawatts or more to the grid. That's an intolerable time horizon for cloud service providers or model developers, OpenAI or lesser-known companies that are developing next-generation models.
As a result, China comes in with some inbuilt advantages in the AI race. It can power as many data centers as it needs to. In order to compete with China, the United States needs to find new sources of power, invest in our energy infrastructure, and make it possible to connect data centers much faster and have them powered with affordable and reliable energy.
Due to the predicted rise of AI data centers, utility companies have nearly doubled their forecasts on how much additional power the United States will need by 2030.
Scotland's Artificial Intelligence Strategy: Trustworthy, Ethical, Inclusive
This March 2021 publication is less useful for education than for seeing how one country approaches AI. Look at the chart on page 8.
Prosak - Get ready for linguistic anarchy
Essay about the dehumanizing and homogenizing effect of AI. Applied to education, maybe this means we need more assignments that tap into what is "human."
Maybe AI prose is just Prosak, the last stage of dehumanizing dullness before a revolution of human-centered, unpredictable creativity sweeps away the prose dullness we’ve been living in and reveals the genuine us: people who don’t perfectly navigate the challenges of adverbs. It’s Pat Boone singing “Tuttie Fruittie” in 1956. It’s Bing Crosby crooning “Little Drummer Boy” with David Bowie in 1977. The creators of artificial intelligence want to mimic and then bind and flatten our creativity in order to monetize it—and us. So maybe the answer is to value what’s real and original in ourselves, even if it’s not pretty.
Joanne Freeman: "College professor peeps: A question. How do you determine when an undergraduate essay has been written by ChatGPT?" — Bluesky
Sit in on a conversation with college professors and high school teachers discussing AI and you'll be better served than by attending that district-sponsored AI session. (Bluesky account needed.)
Does it rhyme? AI, Poetry, Scholarship and Schools
One conclusion drawn from the study is that it raises questions about creativity; for me, it highlights an open problem in schools: how we teach students to value depth and complexity in art and other subject areas.
Harvard Law School Professors and Program Alumni practicing in the field discuss AI in the LAW - LL.M. Centennial | Plenary 4: The Future
Listening to Harvard Law School faculty talk with prominent attorneys about AI's use in their field is worth much more in evaluating its use in the "real world" than anything you can hear at an educational PD session (starts around the 35-minute mark).
Rebind - AI-assisted digital publisher that sells books with an accompanying AI bot offering interactive, personal guidance and expert commentary on the book
In this application of AI, a recognized expert is interviewed for 30+ hours, and that dialog informs the AI tool sold along with the book. LLM-generated text, trained on the book and the expert's input, serves as the chat companion for the reader. Read the article and then look at Rebind to learn more.
How large language models work, a visual intro to transformers | Chapter 5, Deep Learning
30-minute video giving a graphical explanation of how large language models work. Teachers should spend 5 or 10 minutes, or as much as they can stand, clicking through this video to get a sense of what happens "under the hood" that allows an AI tool to work.
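For teachers who want the briefest possible taste of the "under the hood" idea before watching: at each step, a language model assigns a score to every candidate next word and converts those scores into probabilities. The toy sketch below is not a real transformer; the words and scores are entirely made up for illustration.

```python
import math

def softmax(scores):
    """Convert raw scores ("logits") into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# after the prompt "The cat sat on the". These numbers are invented.
candidates = ["mat", "roof", "moon", "keyboard"]
logits = [4.0, 2.5, 0.5, 1.0]

probs = softmax(logits)
for word, p in zip(candidates, probs):
    print(f"{word}: {p:.2f}")

# The model picks (or samples) a next token from these probabilities,
# appends it to the text, and repeats -- one word at a time.
```

The video's real subject, the transformer, is the machinery that computes those scores; this sketch only shows the final scoring-and-choosing step.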
Artificial Intelligence, Dreams and Fears of A Blue Dot
Although some of the articles appear dated (2020), the issues raised at this site are useful for exploring the horizon of AI implications
See how the future of jobs is changing in the age of AI | World Economic Forum
It is helpful to see how the "real world" is adapting to AI even if education isn't chiefly utilitarian. Comparing the level of discourse in business and education is interesting
Artificial Intelligence and Intellectual Property - YouTube
This hour-long video has lawyers and practitioners discussing legal implications of AI in areas such as patent law.
The Center for AI Safety (CAIS)
Currently, society is ill-prepared to manage the risks from AI. CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage AI risk.
Plato's Argument Against Writing
Plato's argument against writing is a perfect fit for contemplation of AI: words do not create understanding in and of themselves. The fact that generative AI can erupt a volcano of text at the click of a mouse doesn't make it intelligent, but it makes it sound intelligent.
WHAT DANGERS DOES ARTIFICIAL INTELLIGENCE POSE TO THE WRITING/TELLING OF HISTORY?
Even if you could figure this out for yourself, it might be faster to read their list.
The shape of the shadow of The Thing - by Ethan Mollick
We have these pieces which let us guess at the shape of the AI in front of us. It isn’t science fiction to assume that AIs will soon talk to you, see you, know about you, do research for you, create images for you - because all of that is already built, and working. I can already pull all of these elements together myself with just a little effort. That means AI can quite easily serve as personal assistant, intern, and companion - answering emails, giving advice, paying attention to the world around you — in a way that makes the Siris and Alexas of the world look prehistoric.
What's been pinging my AI scanner - by Bryan Alexander
December 2023 seems to mark the time when specific generative tools are targeting specific tasks using specific data sets.