Found 5 bookmarks
The LLMentalist Effect: how chat-based Large Language Models rep…
The new era of tech seems to be built on superstitious behaviour
The intelligence illusion seems to be based on the same mechanism as that of a psychic’s con, often called cold reading. It looks like an accidental automation of the same basic tactic.
The chatbot gives the impression of an intelligence that is specifically engaging with you and your work, but that impression is nothing more than a statistical trick.
People sceptical about "AI" chatbots are less likely to use them. Those who actively disbelieve the possibility of chatbot "intelligence" won't get pulled in by the bot. The most active audience will be early adopters, tech enthusiasts, and genuine believers in AGI, who will all generally be less critical and more open-minded.
The chatbot’s answers sound extremely specific to the current context but are in fact statistically generic. The mathematical model behind the chatbot delivers a statistically plausible response to the question. The marks that find this convincing get pulled in.
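To make "statistically generic" concrete, here is a minimal sketch of the cold-reading move the article describes. Everything in it is invented for illustration: the statement pool, the seeding trick, and the function name have no connection to any real chatbot.

```python
import random

# A tiny pool of Barnum-style statements: generic enough to fit
# almost anyone, specific-sounding enough to feel personal.
BARNUM_POOL = [
    "You have considerable unused potential that you have not yet "
    "turned to your advantage.",
    "You tend to be critical of your own work, even when others "
    "find it impressive.",
    "Parts of this feel rushed to you, but the core idea is "
    "genuinely strong.",
]

def cold_read(question: str) -> str:
    """Return a statistically 'safe' response to any question.

    The question is only used to seed the choice, so the answer is
    stable per question and *looks* tailored to it, while being
    entirely generic: the same pool serves every asker.
    """
    rng = random.Random(question)
    return rng.choice(BARNUM_POOL)

# Two very different questions, equally "insightful" answers:
print(cold_read("What do you think of my novel draft?"))
print(cold_read("How should I refactor this codebase?"))
```

The subjective-validation step happens in the reader, not in the code: it is the asker who maps the generic line onto their specific situation.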
The warnings also play a role in setting the stage. “It’s early days” means that when the statistically generic nature of a response is spotted, it’s easily dismissed as an “error”. Anthropomorphising terms such as “hallucination” help dismiss the fact that statistical responses are completely disconnected from meaning and facts. The hype and mythology of AI primes the audience to think of these systems as persons to be understood and engaged with, all but guaranteeing subjective validation.
·softwarecrisis.dev·
AI & the Web: Understanding and managing the impact of Machine Learning models on the Web
This document proposes an analysis of the systemic impact of AI systems, and in particular ones based on Machine Learning models, on the Web, and the role that Web standardization may play in managing that impact.
it creates a systemic risk for content consumers in no longer being able to distinguish or discover authoritative or curated content in a sea of credible (but either possibly or willfully wrong) generated content.
We do not know of any solution that could guarantee (e.g., through cryptography) that a given piece of content was or was not generated (partially or entirely) by AI systems.
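To see why, a minimal signing sketch may help (standard-library HMAC standing in for a real signature scheme; the key and variable names are hypothetical): a signature binds content to a publisher, but nothing in it attests to how the content was produced.

```python
import hashlib
import hmac

# Stands in for a real publisher signing key.
PUBLISHER_KEY = b"hypothetical-publisher-secret"

def sign(content: bytes) -> str:
    # Binds the signature to the publisher's key and these exact bytes.
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

human_text = b"I wrote this paragraph myself."
ai_text = b"A language model wrote this paragraph."

# Both verify identically: the scheme authenticates the publisher,
# not the provenance of the text. Nothing in the mathematics
# distinguishes hand-written bytes from generated ones.
for text in (human_text, ai_text):
    print(verify(text, sign(text)))  # True for both
```

Provenance schemes can attest to a publisher's claims about origin, but the cryptography cannot check whether those claims are true.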
A well-known issue with relying operationally on Machine Learning models is that they will integrate and possibly strengthen any bias ("systematic difference in treatment of certain objects, people or groups in comparison to others" [ISO/IEC 22989]) in the data that was used during their training.
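A toy illustration of that integrate-and-strengthen dynamic (group names and numbers invented for this sketch): a model fit to skewed historical decisions reproduces the skew, and a majority-outcome rule per group amplifies it.

```python
from collections import defaultdict

# Hypothetical historical decisions: group A approved 70% of the
# time, group B only 30%, for otherwise similar records.
training_data = [("A", 1)] * 70 + [("A", 0)] * 30 \
              + [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": learn the approval rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, label in training_data:
    counts[group][0] += label
    counts[group][1] += 1

def predict(group: str) -> int:
    approvals, total = counts[group]
    # Predicting the majority outcome per group turns a 70/30
    # statistical skew into a 100/0 rule: the bias is not just
    # integrated, it is strengthened.
    return 1 if approvals / total >= 0.5 else 0

print(predict("A"))  # 1: always approve
print(predict("B"))  # 0: always reject
```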
Models trained on un-triaged or partially triaged content from the Web are bound to include personally identifiable information (PII). The same is true for models trained on data that users have chosen to share (for public consumption or not) with service providers. These models can often be made to retrieve and share that information with any user who knows how to ask, which breaks expectations of privacy for those whose personal information was collected and is likely to be in breach of privacy regulations in a number of jurisdictions.
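A minimal sketch of the triage step this paragraph implies is often skipped (the regexes are deliberately naive, nowhere near production-grade PII detection): scan scraped text for obvious identifiers before it reaches a training set.

```python
import re

# Deliberately simple patterns; real PII detection needs far more
# than two regexes, but this shows the shape of the triage step.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace recognizable PII spans before the text is used for training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 010-9999."
print(redact(sample))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```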
A number of Machine Learning models have significantly lowered the cost of generating credible textual, audio, and video (real-time or recorded) impersonations of real persons. This creates a significant risk of scaling up phishing and other types of fraud, and thus raises much higher barriers to establishing trust in online interactions. If users no longer feel safe in their digitally mediated interactions, the Web will no longer be able to play its role as a platform for those interactions.
Training and running Machine Learning models can prove very resource-intensive, particularly in terms of power and water consumption.
Some of the largest and most visible Machine Learning models are known or assumed to have been trained with materials crawled from the Web, without the explicit consent of their creators or publishers.
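The Web does already have an advisory consent signal for crawlers; a minimal sketch of honouring it with Python's standard library (the URLs and user-agent string are placeholders):

```python
from urllib.robotparser import RobotFileParser

# robots.txt is the Web's existing, if coarse, opt-out mechanism.
# Honouring it is voluntary: nothing stops a crawler from ignoring
# it, which is exactly the consent gap described above.
robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

url = "https://example.com/articles/some-post"
if robots.can_fetch("MyTrainingCrawler/1.0", url):
    print("Crawl permitted for this user agent")
else:
    print("Publisher has opted this path out; skip it")
```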
·w3.org·
It’s Humans All the Way Down
Writing about the big beautiful mess that is making things for the world wide web.
·blog.jim-nielsen.com·
The Cost of a Tool - Edward Loveall
Maybe you think calling ChatGPT/Stable Diffusion/etc “weapons” is too extreme. “Actual weapons are made for the purpose of causing harm and something that may cause harm should be in a different category,” you say. But I say: if a tool is stealing work, denying healthcare, perpetuating sexism, racism, and erasure, and incentivizing layoffs, splitting hairs over what category we put it in misses the point.
I’m not saying all computers or algorithms are bad. Should we ban hammers because they can potentially be used as weapons? No. But if every time I hammered a nail it also broke someone’s hand, caused someone to have a mental breakdown, or spread misinformation, I would find a different hammer. Especially if that hammer built houses with more vulnerabilities on average.
·blog.edwardloveall.com·
Inside the World of TikTok Spammers and the AI Tools That Enable Them
This is where AI generated formats, Minecraft splitscreens, Reddit stories, 'Would You Rather' videos, and deep sea story spam come from.
This strategy, the influencers say, allows them to passively make $10,000 a month by flooding social media platforms with stolen and low-effort clips while working from private helicopters, the beach, the ski slope, a park, etc. What I found was a complex ecosystem of content parasitism, with thousands of people using a variety of AI tools to make low-quality spammy videos that recycle Reddit AMAs, weird “Would You Rather” games, AI-narrated “scary ocean” clips, ChatGPT-generated fun facts, slideshows of tweets, clips lifted from celebrities, YouTubers, and podcasts.
The easiest and most common way to go viral on TikTok, Mustafa explains in one unlisted video, is to steal content from famous content creators and repost it.
·404media.co·