usamo_report.pdf
Research
2503.23674v1.pdf
Tracing the thoughts of a large language model
AI models are trained and not directly programmed, so we don’t understand how they do most of the things they do. Our new interpretability methods allow us t...
MIT Technology Review
Digital Therapists Get Stressed Too, Study Finds
Chatbots should be built with enough resilience to deal with difficult emotional situations, researchers said.
The Cybernetic Teammate
Having an AI on your team can increase performance, provide expertise, and improve your experience
The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
We examine how artificial intelligence transforms the core pillars of collaboration—performance, expertise sharing, and social engagement—through a prereg…
Consumer Reports’ Assessment of AI Voice Cloning Products - Consumer Reports
Washington, DC – Consumer Reports (CR) released findings today from an assessment of voice cloning products from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. CR found that a majority of the products assessed did not have meaningful safeguards to stop fraud or misuse of their product. Many AI voice cloning products enable […]
AI Can Now Predict Career and Educational Success From a Single Face Image
A recent study conducted by researchers from multiple universities claims that AI can predict your career and education based on your face.
About a quarter of U.S. teens have used ChatGPT for schoolwork – double the share in 2023
Teens are far more likely to say it’s acceptable to use ChatGPT for research (54%) than for math problems (29%) and essays (18%).
How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs - The…
Just 2 hours is all it takes for AI agents to replicate your personality with 85% accuracy
Researchers from Google and Stanford have created accurate AI replicas of more than 1,000 people.
Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies
Chat Generative Pre-Trained Transformer (ChatGPT) has generated excitement and concern in education. While cross-sectional studies have highlighted co…
Study of ChatGPT citations makes dismal reading for publishers | TechCrunch
As more publishers cut content licensing deals with ChatGPT-maker OpenAI, a study put out this week by the Tow Center for Digital Journalism -- looking at
AI reaches new milestone, learns to read sign language in real-time
Discover how AI may soon change the landscape of communication for those who are deaf or hard of hearing.
The AI model achieved a 98% accuracy rate in identifying ASL alphabet gestures, with a near-perfect overall performance score of 99%. This means the system can reliably translate hand movements into recognizable letters, opening up new possibilities for communication technology.
STUDY: Majority of Students Admit to Using AI Against School Policy
Report: Apple Stops Development of iPhone Hardware Subscription Program
Apple has reportedly stopped its development of an iPhone hardware subscription program. The company had planned to launch the program in 2022, delayed it
PIAAC - PIAAC Highlights of U.S. National Results
The Program for the International Assessment of Adult Competencies (PIAAC) — Welcome to PIAAC Results
About — The Collective Intelligence Project
OpenAI is funding research into 'AI morality' | TechCrunch
OpenAI is funding academic research at Duke into algorithms that can predict humans' moral judgements.
Claude favors Kantianism (i.e., focusing on absolute moral rules), while ChatGPT leans ever so slightly utilitarian (prioritizing the greatest good for the greatest number of people). Is one superior to the other? It depends on who you ask.
Who on Earth Is Using Generative AI? (English)
Leveraging unconventional data, including website traffic data and Google Trends, this paper unveils the real-time usage patterns of generative artificial intelligence.
A.I. Chatbots Defeated Doctors at Diagnosing Illness
A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot.
Dr. Chen said he noticed that when he peered into the doctors’ chat logs, “they were treating it like a search engine for directed questions: ‘Is cirrhosis a risk factor for cancer? What are possible diagnoses for eye pain?’”
“It was only a fraction of the doctors who realized they could literally copy-paste in the entire case history into the chatbot and just ask it to give a comprehensive answer to the entire question,” Dr. Chen added.
“Only a fraction of doctors actually saw the surprisingly smart and comprehensive answers the chatbot was capable of producing.”
GPT-Poetry.pdf
Cato CTRL Threat Research: ProKYC Selling Deepfake Tool for Account Fraud Attacks
Cato CTRL has discovered a threat actor, ProKYC, selling a deepfake tool in the cybercriminal underground to enable new account fraud against cryptocurrency exchanges.
Simple techniques to bypass GenAI text detectors: implications for inclusive education - International Journal of Educational Technology in Higher Education
This study investigates the efficacy of six major Generative AI (GenAI) text detectors when confronted with machine-generated content modified to evade detection (n = 805). We compare these detectors to assess their reliability in identifying AI-generated text in educational settings, where they are increasingly used to address academic integrity concerns. Results show significant reductions in detector accuracy (17.4%) when faced with simple techniques to manipulate the AI generated content. The varying performances of GenAI tools and detectors indicate they cannot currently be recommended for determining academic integrity violations due to accuracy limitations and the potential for false accusation which undermines inclusive and fair assessment practices. However, these tools may support learning and academic integrity when used non-punitively. This study aims to guide educators and institutions in the critical implementation of AI text detectors in higher education, highlighting the importance of exploring alternatives to maintain inclusivity in the face of emerging technologies.
By mimicking these imperfections, AI-generated content can effectively mislead detectors into classifying it as human-authored.
Understanding the impact of AI on misinformation
IU is leading a federally funded effort to study the ability of AI to amplify online messaging.
Professor tailored AI tutor to physics course. Engagement doubled. — Harvard Gazette
Preliminary findings inspire other large Harvard classes to test approach this fall
Black Students Are More Likely to Be Falsely Accused of Using AI to Cheat
Report notes why this is a problem that educators need to pay closer attention to.
Black students are more than twice as likely as their white or Hispanic peers to have their writing incorrectly flagged as the work of artificial intelligence tools, concludes a report released Sept. 18 by Common Sense Media, a nonprofit that examines the impact of technology on young people. Overall, about 10 percent of teens of any background said they had their work inaccurately identified as generated by an AI tool, Common Sense found. But 20 percent of Black teens were falsely accused of using AI to complete an assignment, compared with 7 percent of white and 10 percent of Latino teens.
This may be at least partially due to flaws in AI detection software. About 79 percent of teens who had their assignments incorrectly flagged by a teacher also had their work submitted to AI detection software, while 27 percent said their work had not been submitted.
AI detection software has already been shown to have problematic biases, even though secondary school teachers commonly use the technology. More than two-thirds—68 percent—of teachers report using an AI detection tool regularly, according to a survey of 460 6th to 12th grade public school teachers conducted for the Center for Democracy & Technology, a nonprofit organization that aims to shape technology policy.
But the tools often reflect societal biases. Researchers ran essays written by Chinese students for the Test of English as a Foreign Language, or TOEFL, through seven widely-used detectors. They did the same with a sample of essays written by U.S. 8th graders who were native English speakers. The tools incorrectly labeled more than half of the TOEFL essays as AI-generated, while accurately classifying the 8th grade essays as human-crafted.
Common Sense Media’s findings on Black students could be due to either unfairness in AI detection tools or biases in educators themselves, according to experts. “We know that AI is putting out incredibly biased content,” said Amanda Lenhart, the head of research at Common Sense. “Humans come in with biases and preconceived notions about students in their classroom. AI is just another place in which unfairness is being laid upon students of color.” Put another way, even though AI tools aren’t human themselves, they reflect people’s prejudices, even unconscious ones. “AI is not going to walk us out of our pre-existing biases,” Lenhart said.
If a teacher does suspect a student used AI to cheat on an assignment, it’s best to have a conversation with the student before jumping to punitive measures, educators and experts say. Schools also need to craft clear policies on when and how it’s acceptable to use AI to complete schoolwork.
The Common Sense report is based on a nationally representative survey conducted from March to May of 1,045 adults in the United States who are the parents or guardians of one or more teens aged 13 to 18, and responses from one of their teenage children. All 18-year-old respondents were still in high school when surveyed.
Copyright Laundering: A New AI Legal Strategy
Washing Away The Sin Of Plagiarism
Students Are Using AI Already. Here’s What They Think Adults Should Know
A new report details what teens think parents and teachers should know about how they use, or don’t use, generative artificial intelligence