Talk at re:publica 2024, sources and articles on superintelligence
LLMs and image generators* in schools – #KIBedenken
The sign error in the debate about generative tools
No text, no talk, no podcast, no conversation in this debate should come without a prologue on conceptual clarification and on …
Hidde (@hdv@front-end.social)
Scarlett Johansson says OpenAI approached her to use her voice long before it was released. She said no and they still took it. https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her Now, in a blog post OpenAI says: “Sky’s voice is not an imitation of Scarlett Johansson” https://openai.com/index/how-the-voices-for-chatgpt-were-chosen/ But their CEO even tweeted the word “her” as a way of announcing the voice. post truth much? this company just keeps on grabbing what isn't theirs
Testing of detection tools for AI-generated text - International Journal for Educational Integrity
Recent advances in generative pre-trained transformer large language models have emphasised the potential risks of unfair use of artificial intelligence (AI) generated content in an academic environment and intensified efforts in searching for solutions to detect such content. The paper examines the general functionality of detection tools for AI-generated text and evaluates them based on accuracy and error type analysis. Specifically, the study seeks to answer research questions about whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text, and whether machine translation and content obfuscation techniques affect the detection of AI-generated text. The research covers 12 publicly available tools and two commercial systems (Turnitin and PlagiarismCheck) that are widely used in the academic setting. The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text. Furthermore, content obfuscation techniques significantly worsen the performance of tools. The study makes several significant contributions. First, it summarises up-to-date similar scientific and non-scientific efforts in the field. Second, it presents the result of one of the most comprehensive tests conducted so far, based on a rigorous research methodology, an original document set, and a broad coverage of tools. Third, it discusses the implications and drawbacks of using detection tools for AI-generated text in academic settings.
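For orientation, a minimal sketch of the kind of accuracy and error-type analysis the study describes; the documents and detector verdicts below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): accuracy and error-type
# analysis for an AI-text detector. Labels: "ai" = machine-generated,
# "human" = human-written.

from collections import Counter

def evaluate_detector(y_true, y_pred):
    """Return accuracy plus the two error types, given per-document labels."""
    counts = Counter(zip(y_true, y_pred))
    tp = counts[("ai", "ai")]        # AI text correctly flagged
    fn = counts[("ai", "human")]     # AI text misread as human (undetected)
    fp = counts[("human", "ai")]     # human text falsely accused
    tn = counts[("human", "human")]
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }

# Toy example of a detector biased toward calling everything "human",
# the failure mode the study reports.
truth = ["ai", "ai", "ai", "ai", "human", "human", "human", "human"]
preds = ["human", "human", "ai", "human", "human", "human", "human", "human"]
print(evaluate_detector(truth, preds))
```

The study's headline finding corresponds to the false-negative rate here: the tested tools tend to misclassify AI-generated text as human-written rather than the reverse.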
Invented candidates and silence: how chatbots fail at political questions ahead of the EU election
Ahead of the EU election: the chatbots Google Gemini, Microsoft Copilot and ChatGPT frequently answer political questions incorrectly or not at all.
Jason Lefkowitz (@jalefkowit@vmst.io)
Generative AI finally puts the computer in a place a certain kind of technologist has always wanted it to be: an inscrutable oracle, whose operations cannot be questioned even when they produce output that is painfully, obviously wrong
AI Graders: A Professor's Reflection
What assessment bots mean for the future of grading
Insight - Amazon scraps secret AI recruiting tool that showed bias against women
Amazon.com Inc's (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
ChatGPT provides false information about people, and OpenAI can’t correct it
noyb today filed a complaint against the ChatGPT maker OpenAI with the Austrian DPA
‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says
Pressure grows on artificial intelligence firms over the content used to train their products
The Dirty Energy Fueling Amazon’s Data Gold Rush
Northern Virginia is grappling with the environmental effects of a booming data-center empire.
AI really is smoke and mirrors
Just not in *exactly* the way you might think.
The AI hype meets the first traces of reality
Big providers are quietly scaling back their expectations, start-ups are not making money, and behind most of them stands Big Tech anyway. Many central questions remain open.
How generative AI turns memories into photos that never existed
The project "Synthetic Memories" aims to help families around the world recover a past that was never captured on camera ...
AI isn't useless. But is it worth it?
AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.
Adam Harvey (@adam_harvey@tldr.nettime.org)
Adobe Firefly GenAI won’t let users generate image of person giving the “middle finger” but it does allow the “center finger”
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
Word Embedding Demo: Tutorial
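As a rough illustration of what such a demo shows, a minimal sketch of words as vectors with cosine similarity as a proxy for relatedness; the three-dimensional vectors below are made up, while real embeddings are learned from co-occurrence statistics and have hundreds of dimensions.

```python
# Toy word-embedding sketch: similar words get similar vectors, and
# cosine similarity measures how close two vectors point.

import math

toy_embeddings = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.78, 0.68, 0.12],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(toy_embeddings["king"], toy_embeddings["queen"]))  # close to 1
print(cosine(toy_embeddings["king"], toy_embeddings["apple"]))  # noticeably lower
```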
How Large Language Models Work
From zero to ChatGPT
The Scale of the Brain vs Machine Learning
Epistemic status: pretty uncertain. There is a lot of fairly unreliable data in the literature and I make some pretty crude assumptions. Nevertheless, I would be surprised if my conclusions are more than 1-2 OOMs off. The brain is currently our sole example of an AGI. Even small...
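A back-of-the-envelope version of the comparison, using commonly cited round figures (roughly 1e11 neurons and 1e14 synapses for the human brain, 175 billion parameters for GPT-3); treating one synapse as loosely analogous to one parameter is itself exactly the kind of crude assumption the post warns about.

```python
# Order-of-magnitude comparison of brain scale vs. a large language model.
# All numbers are rough, commonly cited estimates, not measurements.

import math

brain_neurons = 1e11        # ~100 billion neurons
brain_synapses = 1e14       # ~100 trillion synapses
gpt3_parameters = 175e9     # GPT-3 parameter count

gap = brain_synapses / gpt3_parameters
print(f"synapse/parameter ratio: {gap:.0f}x  (~{math.log10(gap):.1f} orders of magnitude)")
```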
Do developers still need to learn programming languages in the age of AI?
With the rise of generative AI and no-code/low-code tools, will developers still need to learn how to code?
Forer effect - The Skeptic's Dictionary - Skepdic.com
The LLMentalist Effect: how chat-based Large Language Models rep…
The new era of tech seems to be built on superstitious behaviour
US security agencies warn against Microsoft Copilot – background and reactions
Learn why US security agencies have banned the use of Microsoft Copilot and how Microsoft is responding.
nixCraft 🐧 (@nixCraft@mastodon.social)
Sam Altman tells a room full of VCs with a straight face that he will take billions of their money, build AGI, and then ask it how to generate a return! — OpenAI business plan.
How Stability AI’s Founder Tanked His Billion-Dollar Startup
Unpaid bills, bungled contracts and a disastrous meeting with Nvidia's kingmaker CEO. Inside the stunning downfall of Emad Mostaque.
AI’s Ostensible Emergent Abilities Are a Mirage
According to Stanford researchers, large language models are not greater than the sum of their parts.
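One way to see the argument (a toy sketch, not the authors' code): a capability that improves smoothly with scale can look like a sudden "emergent" jump when it is scored with an all-or-nothing metric such as exact match.

```python
# Toy illustration of metric-induced "emergence": per-token accuracy
# improves smoothly across (hypothetical) model sizes, but exact match
# on a 10-token answer requires every token to be right at once.

answer_length = 10
for per_token_accuracy in [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]:
    exact_match = per_token_accuracy ** answer_length
    print(f"per-token {per_token_accuracy:.2f}  ->  exact-match {exact_match:.3f}")

# The smooth left column turns into a hockey stick in the right column,
# even though nothing discontinuous happened to the underlying capability.
```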
The Stilwell Brain
There are 100 billion individual neurons in the human brain. Working together, they allow us to make sense of, and move through, the world around us. Scientists have built replicas of the human brain with computers, but no one has ever successfully made a brain out of humans. On this episode, I’ll travel back to my hometown of Stilwell, Kansas, and turn it into a working brain!
OpenAI and Meta ready new AI models capable of ‘reasoning’
Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.
Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.
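What "one word after the other" means in practice, as a toy sketch: at each step the model picks a next token given only the text so far, with no separate planning stage. The bigram table below stands in for a real language model and is invented purely for illustration.

```python
# Toy autoregressive generation: one token at a time, no lookahead.

import random

toy_next_token = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["sat"],
    "sat": ["down", "there"],
}

def generate(prompt, steps=4, seed=0):
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(steps):
        candidates = toy_next_token.get(tokens[-1])
        if not candidates:                  # nothing learned for this context: stop
            break
        tokens.append(random.choice(candidates))  # pick next word, no planning ahead
    return " ".join(tokens)

print(generate("the"))
```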
Pluralistic: Humans are not perfectly vigilant (01 Apr 2024) – Cory Doctorow