ai

209 bookmarks
LLMs und Bildgeneratoren* in der Schule – #KIBedenken
The sign error in the debate about generative tools. No text, no talk, no podcast, no conversation in this debate should go without a prologue on conceptual clarification and on …
·seagent.de·
Hidde (@hdv@front-end.social)
Scarlett Johansson says OpenAI approached her to use her voice long before it was released. She said no and they still took it. https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her Now, in a blog post OpenAI says: “Sky’s voice is not an imitation of Scarlett Johansson” https://openai.com/index/how-the-voices-for-chatgpt-were-chosen/ But their CEO even tweeted the word “her” as a way of announcing the voice. post truth much? this company just keeps on grabbing what isn't theirs
·front-end.social·
Testing of detection tools for AI-generated text - International Journal for Educational Integrity
Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.
·edintegrity.biomedcentral.com·
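As a side note on what the paper's "error type analysis" measures, here is a minimal sketch (not from the paper; labels and predictions are made up) showing how the two error types relate to the reported bias toward classifying text as human-written:

```python
# Hypothetical example (not from the paper): ground-truth labels for a few
# documents and a detector's predictions for them.
truth       = ["ai", "ai", "ai", "ai", "human", "human"]
predictions = ["human", "ai", "human", "human", "human", "human"]

# False negative: AI-generated text the tool classifies as human-written.
# False positive: human-written text the tool flags as AI-generated.
false_negatives = sum(t == "ai" and p == "human" for t, p in zip(truth, predictions))
false_positives = sum(t == "human" and p == "ai" for t, p in zip(truth, predictions))
accuracy = sum(t == p for t, p in zip(truth, predictions)) / len(truth)

# The bias the study describes shows up as many false negatives (missed AI text)
# alongside few false positives (human writing is rarely accused).
print(f"accuracy={accuracy:.2f}  false_negatives={false_negatives}  false_positives={false_positives}")
```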
Jason Lefkowitz (@jalefkowit@vmst.io)
Generative AI finally puts the computer in a place a certain kind of technologist has always wanted it to be: an inscrutable oracle, whose operations cannot be questioned even when they produce output that is painfully, obviously wrong
·vmst.io·
Der KI-Hype trifft auf die ersten Spuren von Realität
Major providers are quietly scaling back their expectations, start-ups aren't making any money, and behind most of it is Big Tech anyway. Many central questions remain open.
·derstandard.de·
AI isn't useless. But is it worth it?
AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.
·citationneeded.news·
Adam Harvey (@adam_harvey@tldr.nettime.org)
Attached: 4 images. Adobe Firefly GenAI won’t let users generate an image of a person giving the “middle finger”, but it does allow the “center finger”.
·tldr.nettime.org·
The Scale of the Brain vs Machine Learning
Epistemic status: pretty uncertain. There is a lot of fairly unreliable data in the literature and I make some pretty crude assumptions. Nevertheless, I would be surprised if my conclusions are more than 1-2 OOMs off. The brain is currently our sole example of an AGI. Even small...
·beren.io·
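For context on the kind of order-of-magnitude comparison the post makes, a rough back-of-envelope sketch (the figures are commonly cited ballpark numbers, not taken from the linked post):

```python
import math

# Commonly cited ballpark figures (assumptions, not taken from the linked post).
brain_neurons  = 8.6e10   # roughly the "100 billion" neurons usually quoted
brain_synapses = 1e14     # low-end estimate of synapse count
gpt3_params    = 1.75e11  # GPT-3 parameter count

# Treating one synapse as loosely comparable to one learned parameter,
# the gap between the brain and a large 2020-era model in orders of magnitude:
ratio = brain_synapses / gpt3_params
print(f"synapses / parameters ≈ {ratio:.0f}x (~{math.log10(ratio):.1f} OOMs)")
```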
nixCraft 🐧 (@nixCraft@mastodon.social)
Attached: 1 video Sam Altman tells a room full of VCs with a straight face that he will take billions of their money, build AGI, and then ask it how to generate a return! — OpenAI business plan.
·mastodon.social·
The Stilwell Brain
There are 100 billion individual neurons in the human brain. Working together, they allow us to make sense of, and move through, the world around us. Scientists have built replicas of the human brain with computers, but no one has ever successfully made a brain out of humans. On this episode, I’ll travel back to my hometown of Stilwell, Kansas, and turn it into a working brain!
·youtube.com·
OpenAI and Meta ready new AI models capable of ‘reasoning’
Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.

Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.

·archive.ph·