AI & LLMs Research


21 bookmarks
Research Agenda - United Kingdom AI Security Institute
Scroll down to the "Human Influence" section of the agenda to see how the United Kingdom's AI Security Institute plans to investigate how highly capable AI systems can be used to manipulate, persuade, deceive, or subtly influence humans, and to develop methods to measure these impacts.
The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers - Microsoft Research
This study surveyed 319 knowledge workers and found that greater confidence in AI results brings less critical thinking. Hidden in the abstract is an important observation: GenAI shifts knowledge work toward information verification, response integration, and task stewardship. I would suggest that information verification will only become harder as more and more content is itself generated with AI.
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

58 participants were divided into three groups to write essays: one used an AI search engine, another an LLM tool, and a third relied only on their own brains. EEG analysis and follow-up interviews showed "significantly different neural connectivity patterns and cognitive strategies". In short, this research supports limiting students' use of AI, since their brains do less work when they rely on it.

This was published before peer review because the authors believe the risk of AI in classrooms, particularly with younger students, is so great.

The Impact of Generative AI on Critical Thinking
Microsoft's own research confirms something that was already fairly obvious: relying on a text-generating machine to come up with answers erodes critical thinking, and it is a method favoured by those who never liked doing critical thinking in the first place.
[2303.15056] ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks
It's likely that most people won't be able to understand the five-sentence abstract of this study, but that is rather the point. There is serious science behind all the talk about AI, and it provides the best indication of AI's implications, even if you don't understand it completely. In this specific case, people working crowdsourced gig jobs are going to be out of work, and the corporations that profit from them will be down one income stream.
KG Medical Knowledge Poisoning
Chart accompanying the Nature article, showing how a malicious actor can poison medical knowledge by inserting hoax papers into LLM training data.
Poisoning medical knowledge using large language models - Nature Machine Intelligence
Most readers will only understand this abstract on a surface level, but that is enough to demonstrate the potential harm to knowledge. More evidence that the concept of "source reliability" is dead. Of course, "malicious papers" are nothing new to history teachers; they are just part of the library.
The corruption risks of artificial intelligence 2022 Report of Transparency International of the European Union

Just one sentence from this report encapsulates its thesis: "Autocratic regimes and a weak rule of law further exacerbate the risk that AI in these societies will be deployed in a corrupt manner, such as by a political or economic leadership seeking self-enrichment or a consolidation of their grip on power via the illegitimate suppression of opposition."

LLM03: Training Data Poisoning - OWASP Top 10 for LLM & Generative AI Security
Reading the examples of LLM vulnerability through the lens of Yochai Benkler's description of a "propaganda feedback loop" makes it clear that AI can aggressively weaponize disinformation campaigns that already benefit from pre-existing media-environment dynamics.
Poisoning Web-Scale Training Datasets is Practical
Although the research is grounded in the computer science of LLMs, history educators with experience of persistent myths, marginalized voices, and manipulated narratives will recognize the implications of data that can be manipulated in this way.