AI-GenAI

1594 bookmarks
Resources AI
🤖 AI Resources - bit.ly/eric-ai All of my resources are licensed under a Creative Commons Attribution Non-Commercial 4.0 United States li...
controlaltachieve.com
Amazon Accuses Perplexity of Computer Fraud, Demands It Stop AI Agent From Buying On Its Site - Slashdot
Amazon has sent a cease-and-desist letter to Perplexity AI demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases online for users. From a report: The e-commerce giant is accusing Perplexity of committing computer fraud by failing to disclose when its AI ag...
tech.slashdot.org
The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI | RAND

- Industry leaders should ensure that technical staff understand the project purpose and domain context: Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure.
- Industry leaders should choose enduring problems: AI projects require time and patience to complete. Before they begin any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year.
- Industry leaders should focus on the problem, not the technology: Successful projects are laser-focused on the problem to be solved, not the technology used to solve it.
- Industry leaders should invest in infrastructure: Up-front investments in infrastructure to support data governance and model deployment can reduce the time required to complete AI projects and can increase the volume of high-quality data available to train effective AI models.
- Industry leaders should understand AI's limitations: When considering a potential AI project, leaders need to include technical experts to assess the project's feasibility.
- Academic leaders should overcome data-collection barriers through partnerships with government: Partnerships between academia and government agencies could give researchers access to data of the provenance needed for academic research.
- Academic leaders should expand doctoral programs in data science for practitioners: Computer science and data science program leaders should learn from disciplines, such as international relations, in which practitioner doctoral programs often exist side by side at universities to provide pathways for researchers to apply their findings to urgent problems.

rand.org
Why do some of us love AI, while others hate it? The answer is in how our brains perceive risk and trust

Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we’re invited to join.

theconversation.com
ICE Investigations, Powered by Nvidia
HSI uses machine learning algorithms “to identify and extract critical evidence, relationships, and networks from mobile device data, leveraging machine learning capabilities to determine locations of interest.” The document also says HSI uses large language models to “identify the most relevant information in reports, accelerating investigative analysis by rapidly identifying persons of interest, surfacing trends, and detecting networks or fraud.”
theintercept.com
Can I Upload That? AI, Copyright, and Our Classrooms – TCEA TechNotes Blog
Wondering if you are violating copyright when you save a PDF at the library then drop it into your favorite AI summarizing tool? Explore this and more at TCEA TechNotes Blog, your go-to source for educational technology and teaching innovation.
blog.tcea.org
Chegg slashes 45% of workforce, blames 'new realities of AI'

Chegg will lay off 45% of its workforce, cutting 388 jobs as generative AI and declining Google search traffic slash revenue. The company says it is restructuring its academic products while continuing to fund its own AI tools. Dan Rosensweig returns as CEO effective immediately, replacing Nathan Schultz, who shifts to executive advisor. The board ended its strategic review and unanimously chose to keep Chegg independent. Chegg’s stock has crashed 99% from its 2021 high, shrinking its market cap from $14.7 billion to roughly $156 million. The dramatic value loss, coupled with two major layoff rounds this year, shows how quickly AI-driven competition has gutted the firm’s longtime education model.

cnbc.com
Big Tech Makes Cal State Its A.I. Training Ground

California State University has launched a sweeping initiative to position itself as the nation’s “largest A.I.-empowered” university. The 22-campus system is paying OpenAI $16.9 million for ChatGPT Edu access and is running an Amazon-backed A.I. camp that trains students on tools like Bedrock. The ChatGPT Edu deal covers more than half a million students and staff, which OpenAI calls its biggest deployment to date. Cal State has also convened an A.I. committee with representatives from a dozen major tech firms to shape the skills employers want from graduates. The move hands unprecedented influence over curriculum to Silicon Valley inside the country’s biggest public university. Faculty senates on multiple campuses have passed resolutions condemning the arrangement as an expensive surrender of academic independence and rigor.

nytimes.com
How AGI became the most consequential conspiracy theory of our time
The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth that persists for many of the same reasons conspiracies do.
technologyreview.com
Why AI coding tools like Cursor and Replit are doomed - and what comes next

"Most of those startups depend on Anthropic's model," Burton said. In Burton's view, Anthropic's Claude family of large language models is the best among the AI frontier model makers in solving the problem of automatic code generation. "Anthropic models are better than anyone at code generation."

As a result, "Code generation tools are going to struggle to keep ahead of Anthropic," he said.

Anthropic has built its own agentic coding tool on top of Claude, called Claude Code. "Anthropic has got the foundation models. Claude Code is probably going to be good enough," said Burton.

zdnet.com
Emergent introspective awareness in large language models \ Anthropic

Anthropic researchers published a new study finding that Claude can sometimes notice when concepts are artificially planted in its processing and separate internal “thoughts” from what it reads, showing limited introspective capabilities. The details:
- Specific concepts (like "loudness" or "bread") were implanted into Claude's processing, with the AI correctly noticing something unusual 20% of the time.
- When shown written text and given injected "thoughts," Claude was able to accurately repeat what it read while separately identifying the planted concept.
- Models adjusted internally when instructed to "think about" specific words while writing, showing some deliberate control over their processing patterns.
Why it matters: This research suggests models may be developing some ability to monitor their own processing, which could make them more transparent by helping them accurately explain their reasoning. But it could also be a double-edged sword, with systems potentially learning to better conceal and selectively report their thoughts.

anthropic.com
Introducing Canva’s Creative Operating System

The Canva Design Model understands structure and hierarchy to produce completely editable designs, with integration into ChatGPT, Claude, and Gemini. The Creative Operating System’s tools include Video 2.0 for streamlined editing, forms, data connectors, email design, and a 3D generator. Grow consolidates marketing workflows by letting teams browse winning ads, create brand-aware variations, publish directly to Meta, and track performance. Affinity, the pro-design tool Canva acquired in 2024, is also relaunching as an all-in-one free creative app with built-in Canva integrations.
Why it matters: AI design tools have come a long way in the past year, and Canva is keeping pace with the acceleration. Now with its own model and an AI feature for every creative need, the platform is not only empowering its users but also reducing the need to hop to rivals or more ‘professional’ options.

canva.com
Opinion | Why Even Basic A.I. Use Is So Bad for Students
A philosophy professor calls BS on the “AI for outlining is harmless” argument: letting students outsource seemingly benign tasks like summarizing prevents them from developing the linguistic capacity that is thinking itself. Without practice determining “what is being argued for and how,” young people won't be able to understand medical consent forms, evaluate arguments, or participate meaningfully in democracy.
nytimes.com
The Library of Babel Group
The Library of Babel Group* is a nascent, international coalition of educators confronting and resisting the incursion of surveillance, automation and datafication into spaces of teaching, learning, research, and creative expression.
law.georgetown.edu
Opinion | How White-Collar Workers Could Fuel a New Populist Movement — on the Left
“The richest people in the world are investing many hundreds of billions of dollars into AI” to make themselves “even more powerful,” wrote independent Vermont Sen. Bernie Sanders on X last month, warning of “massive” white-collar job losses, in addition to blue-collar cuts.
politico.com
Proposed U.S. law targets contact center AI, offshoring | TechTarget
The Keep Call Centers In America Act of 2025 -- both versions sponsored with bipartisan support -- would keep tabs on which U.S. companies with more than 50 employees plan to offshore at least 30% of their customer support operations or to relocate a whole contact center offshore. The U.S. Secretary of Labor would maintain a publicly available list of these businesses; those that make the list risk losing federal grants and would be ineligible for federal contracts.
techtarget.com
Resisting GenAI and Big Tech in Higher Education
Generative AI is permeating higher education in many different ways: it is all around us and increasingly embedded in university work and life, even if we don’t want to use it. But people are also sounding the alarm: GenAI is disrupting learning and undermining trust in the integrity of academic work, while its energy consumption, use of water, and rapid expansion of data centers are exacerbating ecological crises. What can we do? How do we resist? Come learn about the environmental, social, economic, and political threats that AI poses, how we can individually and collectively resist and refuse, and how some are challenging the narrative of inevitability. Join an interactive discussion with international scholars and activists on resisting GenAI and big tech in higher education. Inputs from multiple scholar-activists including Christoph Becker (U of Toronto, CA), Mary Finley-Brook (U of Richmond, USA), Dan McQuillan (Goldsmiths U of London, UK), Sinéad Sheehan (University of Galway, Ireland), Jennie Stephens (National University of Ireland Maynooth, IE), and Paul Lachapelle (U of Montana, USA).
lmula.zoom.us
Introducing Perplexity Patents: AI-Powered Patent Search for Everyone
Perplexity Patents is the world’s first AI patent research agent, making IP intelligence accessible to everyone.
perplexity.ai
Canva launches its own design model, adds new AI features to the platform | TechCrunch
Canva said that it is launching its own foundational model, trained on its design elements, that generates designs with editable layers and objects rather than flat images. The model works across different formats, including social media posts, presentations, whiteboards, and websites.
techcrunch.com
GrapesJS - Free and Open Source Web Template Editor Framework

GrapesJS is an open-source visual editor for building websites, landing pages, email templates, and newsletters, whether you’re a designer or a non-technical team. How you can use it:
- Clone a webpage and customize it instantly
- Drag-and-drop your way to a landing page or signup flow
- Export clean, editable code for your devs
- Embed the editor inside your own product or tools stack (see the sketch below)
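For the embedding use case, here is a minimal TypeScript sketch, not an official GrapesJS example: it assumes a page containing a hypothetical <div id="gjs"> placeholder and a bundler that handles the CSS import, and it uses the standard grapesjs.init options.

```typescript
import grapesjs from 'grapesjs';
import 'grapesjs/dist/css/grapes.min.css'; // editor styles; assumes your bundler handles CSS imports

// Mount the editor on an existing element (hypothetical <div id="gjs"> placeholder).
const editor = grapesjs.init({
  container: '#gjs',
  fromElement: true,     // seed the canvas from the container's existing HTML
  height: '100vh',
  storageManager: false, // keep everything in memory for this sketch
});

// Read back clean, editable markup and styles to hand off to developers.
const html = editor.getHtml();
const css = editor.getCss();
console.log(html, css);
```

If you need persistence, storageManager can instead be configured for local or remote storage rather than disabled as above.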

grapesjs.com
Liminary

Liminary automatically captures, organizes, and recalls information from articles, PDFs, videos, and meeting transcripts, so you don’t lose ideas in the noise. It uses agentic memory recall and connection mapping to surface the right insight at the exact moment you need it, whether you’re drafting strategy, writing a brief, or synthesizing research. How you can use it:
- Pull key insights from docs without re-reading everything
- Auto-generate briefs or summaries from saved sources
- Map relationships between ideas across projects
- Keep research organized without manual tagging

liminary.io
Character.AI is ending its chatbot experience for kids | TechCrunch

Character.AI is removing open-ended companion chats for anyone under 18 after increasing concerns about emotional attachment, dependency, and mental health risks among younger users. Here’s what’s changing:
- Companion-style chats are being phased out for all minors.
- The platform is rolling out stricter age verification.
- The app will refocus on creative and role-based interactions, not emotional support.
- Usage time limits will appear before the full removal.

techcrunch.com