263 bookmarks
Vibe Coding in Practice: Motivations, Challenges, and a Future...
AI code generation tools are transforming software development, especially for novice and non-software developers, by enabling them to write code and build applications faster and with little to no human intervention. Vibe coding is the practice where users rely on AI code generation tools through intuition and trial-and-error without necessarily understanding the underlying code. Despite widespread adoption, no research has systematically investigated why users engage in vibe coding, what they experience while doing so, and how they approach quality assurance (QA) and perceive the quality of the AI-generated code. To this end, we conduct a systematic grey literature review of 101 practitioner sources, extracting 518 firsthand behavioral accounts about vibe coding practices, challenges, and limitations. Our analysis reveals a speed-quality trade-off paradox, where vibe coders are motivated by speed and accessibility, often experiencing rapid "instant success and flow", yet most perceive the resulting code as fast but flawed. QA practices are frequently overlooked, with many skipping testing, relying on the models' or tools' outputs without modification, or delegating checks back to the AI code generation tools. This creates a new class of vulnerable software developers, particularly those who build a product but are unable to debug it when issues arise. We argue that vibe coding lowers barriers and accelerates prototyping, but at the cost of reliability and maintainability. These insights carry implications for tool designers and software development teams. Understanding how vibe coding is practiced today is crucial for guiding its responsible use and preventing a broader QA crisis in AI-assisted development.
·arxiv.org·
Colf
Prompt solutions to algorithmic problems with the fewest tokens.
·colf.dev·
Writing is thinking
On the value of human-generated scientific writing in the age of large-language models.
·nature.com·
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers. The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
·up.raindrop.io·
Is there a Half-Life for the Success Rates of AI Agents?
Building on the recent empirical work of Kwa et al. (2025), I show that within their suite of research-engineering tasks the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model — a constant rate of failing during each minute a human would take to complete the task.
·tobyord.com·
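The model described above reduces to a single formula: if an agent fails at a constant rate during each human-minute of task length, its success probability declines exponentially with that length, and each agent has a characteristic half-life. A minimal Python sketch of that relationship follows; the failure rate and task lengths are illustrative assumptions, not figures from the article.

```python
import math

# Constant-hazard sketch (illustrative values, not numbers from the article):
# if an agent fails independently at a constant rate `lam` per human-minute,
# its success probability on a task a human would take `t` minutes is
# exp(-lam * t), and the length at which success drops to 50% is the half-life.

def success_probability(t_minutes: float, lam: float) -> float:
    """P(success) on a task a human would take t_minutes, given failure rate lam per minute."""
    return math.exp(-lam * t_minutes)

def half_life(lam: float) -> float:
    """Task length (in human-minutes) at which the success rate falls to 50%."""
    return math.log(2) / lam

if __name__ == "__main__":
    lam = 0.01  # hypothetical failure rate: 1% per human-minute
    for t in (10, 60, 240):
        print(f"{t:>4} min task -> success {success_probability(t, lam):.2f}")
    print(f"half-life = {half_life(lam):.1f} human-minutes")
```

The exponential form is what makes a single half-life per agent a meaningful summary: halving the success rate always costs the same additional task length.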
Who is using AI to code? Global diffusion and impact of generative AI
Generative coding tools promise big productivity gains, but uneven uptake could widen skill and income gaps. We train a neural classifier to spot AI-generated Python functions in 80 million GitHub commits (2018-2024) by 200,000 developers and track how fast, and where, these tools take hold. By December 2024, AI wrote an estimated 30.1% of Python functions from U.S. contributors, versus 24.3% in Germany, 23.2% in France, 21.6% in India, 15.4% in Russia and 11.7% in China. Newer GitHub users use AI more than veterans, while male and female developers adopt at similar rates. Within-developer fixed-effects models show that moving to 30% AI use raises quarterly commits by 2.4%. Coupling this effect with occupational task and wage data puts the annual value of AI-assisted coding in the United States at $9.6-$14.4 billion, rising to $64-$96 billion if we assume higher estimates of productivity effects reported by randomized controlled trials. Moreover, generative AI prompts learning and innovation, leading to increases in the number of new libraries and library combinations that programmers use. In short, AI usage is already widespread but highly uneven, and the intensity of use, not only access, drives measurable gains in output and exploration.
·arxiv.org·
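The headline estimate above comes from a within-developer (fixed-effects) comparison. Below is a minimal sketch of that estimator on synthetic data, assuming a single regressor and developer effects only (the paper's specification is richer): demean log commits and AI share within each developer, fit the slope by OLS, and read off the implied change at 30% AI use. The true slope here is chosen so the arithmetic lands near the reported 2.4%; it is not a parameter taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic developer-quarter panel (illustrative only): each developer has a
# baseline productivity level, and log commits respond to the AI-generated
# share of code with a slope chosen here for illustration.
n_dev, n_quarters, beta_true = 500, 8, 0.08
dev_effect = rng.normal(0.0, 1.0, n_dev)                  # developer fixed effects
ai_share = rng.uniform(0.0, 0.5, (n_dev, n_quarters))     # AI-generated share of code
log_commits = (dev_effect[:, None] + beta_true * ai_share
               + rng.normal(0.0, 0.1, (n_dev, n_quarters)))

# Within-developer transformation: demeaning both variables by developer
# removes the fixed effects; the slope is then estimated by OLS.
x = ai_share - ai_share.mean(axis=1, keepdims=True)
y = log_commits - log_commits.mean(axis=1, keepdims=True)
beta_hat = (x * y).sum() / (x ** 2).sum()

# Implied effect of moving from 0% to 30% AI use on quarterly commits.
print(f"estimated slope: {beta_hat:.3f}")
print(f"commits change at 30% AI use: {100 * (np.exp(beta_hat * 0.3) - 1):.1f}%")
```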
PAC-MAN 45 Years Of Impact | Official Site
For 45 years, PAC-MAN has made an impact on the world by connecting with people across ages, borders and genders through the power of play.
·bandainamcoent.com·
Harnessing the Universal Geometry of Embeddings
We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches. Our unsupervised approach translates any embedding to and from a universal latent representation (i.e., a universal semantic structure conjectured by the Platonic Representation Hypothesis). Our translations achieve high cosine similarity across model pairs with different architectures, parameter counts, and training datasets. The ability to translate unknown embeddings into a different space while preserving their geometry has serious implications for the security of vector databases. An adversary with access only to embedding vectors can extract sensitive information about the underlying documents, sufficient for classification and attribute inference.
·arxiv.org·
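The quality claim in this abstract is stated as cosine similarity between a translated embedding and the target model's own embedding of the same text. A small sketch of that check follows, with random placeholders standing in for the real translator and models; the unsupervised translation method itself is not reproduced here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two matrices of embeddings."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a_norm * b_norm).sum(axis=1)

# Illustrative stand-ins: `translated` would come from the unsupervised
# translator described in the abstract; `target` are the same documents
# embedded directly by the target model. Both are random placeholders here.
rng = np.random.default_rng(1)
target = rng.normal(size=(100, 768))
translated = target + 0.1 * rng.normal(size=(100, 768))   # pretend near-perfect translation

scores = cosine_similarity(translated, target)
print(f"mean cosine similarity: {scores.mean():.3f}")
```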