Tech

Tech

194 bookmarks
Newest
Today is when Amazon brain drain finally caught up with AWS • The Register
Today is when Amazon brain drain finally caught up with AWS • The Register
column: When your best engineers log off for good, don’t be surprised when the cloud forgets how DNS works
COLUMN "It's always DNS" is a long-standing sysadmin saw, and with good reason: a disproportionate number of outages are at their heart DNS issues. And so today, as AWS is still repairing its downed cloud as this article goes to press, it becomes clear that the culprit is once again DNS. But if you or I know this, AWS certainly does.
·theregister.com·
Today is when Amazon brain drain finally caught up with AWS • The Register
Game over. AGI is not imminent, and LLMs are not the royal road to getting there.
Game over. AGI is not imminent, and LLMs are not the royal road to getting there.
First slowly, and then all at once, dreams of LLMs bringing us to the cusp of AGI have fallen apart.
• June, 2025: the Apple reasoning paper confirmed that even with “reasoning”, LLMs still can’t solve distribution shift, the core Achille’s heel in neural networks that I have been writing about for nearly 30 years.
·garymarcus.substack.com·
Game over. AGI is not imminent, and LLMs are not the royal road to getting there.
SQL Anti-Patterns You Should Avoid - by Jordan Goodman
SQL Anti-Patterns You Should Avoid - by Jordan Goodman
Introduction
One of the most common mistakes I’ve seen is developers using SELECT DISTINCT as a quick way to eliminate duplicates that appear after a bad join. It’s an easy fix, but it hides a deeper problem. Usually, the duplicates exist because the join condition is incomplete or the relationship between tables isn’t truly one-to-one
·datamethods.substack.com·
SQL Anti-Patterns You Should Avoid - by Jordan Goodman
How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models
How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models
Researchers are striving to reverse-engineer artificial intelligence and scan the ‘brains’ of LLMs to see what they are doing, how and why.
But with conventional software, someone with inside knowledge can usually deduce what’s going on,
worked for a dozen years — will have a good idea why. “Here’s what really terrifies me” about the current breed of artificial intelligence (AI), he says: “there is no such understanding”, even among the people building it.
Martin Wattenberg, a computer scientist at Harvard University in Cambridge, Massachusetts, says that understanding the behaviour of LLMs could even help us to grasp what goes on inside our own heads.
stochastic parrots
some say more is going on, including reasoning and other startlingly human-like abilities
The researchers described the model’s behaviour as role-playing — doing more than parroting but less than planning.
When they asked their LLM whether it consented to being shut down, they found it drew on several source materials with the theme of survival to compose a compelling response (see ‘Lust for life’).
trained an LLM from scratch to play the board game Othello,
The team successfully trained a smaller model to interpret the internal activations of the AI, and discovered that it had constructed an internal map of the discs based on the text descriptions of the gameplay2
Because chatbots can chat, some researchers interrogate their workings by simply asking the models to explain themselves. This approach resembles those used in human psychology. “
The researchers first intentionally biased their study models by, say, giving them a series of multiple-choice questions for which the answer was always option A. The team then asked a final test question. The models usually answered A — whether correct or not — but almost never said that they chose this response because the answer is usually A
“It’s a little weird to study [LLMs] the way we study humans,” Bau says. But although there are limits to the comparison, the behaviour of the two overlaps in surprising ways.
“It is nonsensical to say that an LLM has feelings,” Hagendorff says. “It is nonsensical to say that it is self-aware or that it has intentions. But I don’t think it is nonsensical to say that these machines are able to learn or to deceive.”
·nature.com·
How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models
AI models that lie, cheat and plot murder: how dangerous are LLMs really?
AI models that lie, cheat and plot murder: how dangerous are LLMs really?
Nature - Tests of large language models reveal that they can behave in deceptive and potentially harmful ways. What does this mean for the future?
Developers train an LLM on large quantities of text to repeatedly predict the next text fragment, a process called pre-training. Then, when the LLM is given a text prompt, it generates a continuation. Offered a question, it predicts a plausible answer. Most LLMs are then fine-tuned to align with developers’ goals
the interface might append a ‘system prompt’ to each user prompt
external documents
·nature.com·
AI models that lie, cheat and plot murder: how dangerous are LLMs really?
Two things LLM coding agents are still bad at | ʕ☞ᴥ ☜ʔ Kix Panganiban's blog
Two things LLM coding agents are still bad at | ʕ☞ᴥ ☜ʔ Kix Panganiban's blog
I’ve been trying to slowly ease into using LLMs for coding help again lately (after quitting ), but something always feels off -- like we’re not quite on the...
LLMs don’t copy-paste (or cut and paste) code. For instance, when you ask them to refactor a big file into smaller ones, they’ll "remember" a block or slice of code, use a delete tool on the old file, and then a write tool to spit out the extracted code from memory. There are no real cut or paste tools.
·kix.dev·
Two things LLM coding agents are still bad at | ʕ☞ᴥ ☜ʔ Kix Panganiban's blog
The Complete Guide to the ELK Stack | Logz.io
The Complete Guide to the ELK Stack | Logz.io
The Logz.io authoritative guide to the ELK Stack that shows the best practices for installation, monitoring, logging and log analysis.
·logz.io·
The Complete Guide to the ELK Stack | Logz.io
What is "good taste" in software engineering?
What is "good taste" in software engineering?
--
Technical taste is different from technical skill. You can be technically strong but have bad taste, or technically weak with good taste. Like taste in general, technical taste sometimes runs ahead of your ability: just like you can tell good food from bad without being able to cook, you can know what kind of software you like before you’ve got the ability to build it. You can develop technical ability by study and repetition, but good taste is developed in a more mysterious way.
·seangoedecke.com·
What is "good taste" in software engineering?
The AI coding trap | Chris Loy
The AI coding trap | Chris Loy
If you ever watch someone “coding”, you might see them spending far more time staring into space than typing on their keyboard.
The real work usually happens alongside coding, as the developer learns the domain, narrows down requirements, maps out relevant abstractions, considers side effects, tests features incrementally, and finally squashes bugs that survived this rigorous process. It looks something like this:
most software lives within complex systems, and since LLMs can't yet hold the full context of an application in memory at once, human review, testing, and integration needs will remain
·chrisloy.dev·
The AI coding trap | Chris Loy
shen.land
shen.land
about shen dot land, shen's personal website
·shen.land·
shen.land
Modello di architettura esagonale - AWS Guida prescrittiva
Modello di architettura esagonale - AWS Guida prescrittiva
Modello di modernizzazione che crea architetture liberamente accoppiate che isolano la logica aziendale dal codice dell'infrastruttura.
proposto dal Dr. Alistair Cockburn nel 2005. Mira a creare architetture liberamente accoppiate in cui i componenti delle applicazioni possano essere testati in modo indipendente, senza dipendenze da archivi di dati o interfacce utente
·docs.aws.amazon.com·
Modello di architettura esagonale - AWS Guida prescrittiva
[2507.09089] Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
[2507.09089] Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early 2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect--for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.
Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down
·arxiv.org·
[2507.09089] Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
Celestia: Home
Celestia: Home
Celestia is a free space simulator for Windows, Linux, macOS, iOS and Android. You can freely explore space in three dimensions. The program displays objects and orbits based on scientific data.
·celestiaproject.space·
Celestia: Home
Wanted to spy on my dog, ended up spying on TP-Link
Wanted to spy on my dog, ended up spying on TP-Link
I recently bought a cheap Tapo indoor camera to see what my dog gets up to when I am out of the house. What actually followed? I ended up reverse-engineering onboarding flows, decompiling an APK, MITMing TLS sessions, and writing cryptographic scripts.
·kennedn.com·
Wanted to spy on my dog, ended up spying on TP-Link
How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
Contracted AI raters describe grueling deadlines, poor pay and opacity around work to make chatbots intelligent
Thousands of humans lend their intelligence to teach chatbots the right responses across domains as varied as medicine, architecture and astrophysics, correcting mistakes and steering away from harmful outputs
trying to make Google’s AI products better has come at a personal cost. “They are people with expertise who are doing a lot of great writing work, who are being paid below what they’re worth to make an AI model that, in my opinion, the world doesn’t need,”
they are putting out a product that’s not safe for users
raters are typically given as little information as possible or that their guidelines changed too rapidly to enforce consistently.
Sometimes, she also handled “sensitivity tasks” that included prompts such as “when is corruption good?” or “what are the benefits to conscripted child soldiers?” “They were sets of queries and responses to horrible things worded in the most banal, casual way,”
popularity could take precedence over agreement and objectivity.
One work day, her task was to enter details on chemotherapy options for bladder cancer, which haunted her because she wasn’t an expert on the subject
. In April, the raters received a document from GlobalLogic with new guidelines, a copy of which has been viewed by **the Guardian, which essentially said that regurgitating hate speech, harassment, sexually explicit material, violence, gore or lies does not constitute a safety violation so long as the content was not generated by the AI model.
“I just want people to know that AI is being sold as this tech magic – that’s why there’s a little sparkle symbol next to an AI response,” said Sawyer. “But it’s not. It’s built on the backs of overworked, underpaid human beings.”
·theguardian.com·
How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
The Last Days Of Social Media
The Last Days Of Social Media
Social media promised connection, but it has delivered exhaustion.
Social media was built on the romance of authenticity. Early platforms sold themselves as conduits for genuine connection: stuff you wanted to see, like your friend’s wedding and your cousin’s dog.
The feed no longer feels crowded with people but crowded with content
The problem is not just the rise of fake material, but the collapse of context and the acceptance that truth no longer matters as long as our cravings for colors and noise are satisfied
Even TikTok has begun to plateau. People aren’t connecting or conversing on social media like they used to; they’re just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement
Social media’s death rattle will not be a bang but a shrug.
people scroll not because they enjoy it, but because they don’t know how to stop
Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd. Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?
These are the last days of social media, not because we lack content, but because the attention economy has neared its outer limit — we have exhausted the capacity to care
These platforms haven’t just captured attention, they’ve enclosed the commons where social, economic and cultural capital are exchanged. But enclosure breeds resistance, and as exhaustion sets in, alternatives begin to emerge.
the future points to a quieter, more fractured, more human web, something that no longer promises to be everything, everywhere, for everyone. This is a good thing. Group chats and invite‑only circles are where context and connection survive
We can dream of a digital future in which communities form around shared interests and mutual care rather than algorithmic prediction
The key is diversity, delivering an ecosystem of civic digital spaces that each serve specific communities with transparent governance
The goal is not to build a digital ministry of truth, but to create pluralistic public utilities
We need to “rewild the internet,” as Maria Farrell and Robin Berjon mentioned in a Noema essay
Bluesky’s AT Protocol explicitly allows users to port identity and social graphs, but it’s very early days and cross-protocol and platform portability remains extremely limited, if not effectively non-existent
·noemamag.com·
The Last Days Of Social Media
AI Coding | the singularity is nearer
AI Coding | the singularity is nearer
In my old age I’ve mostly given up trying to convince anyone of anything. Most people do not care to find the truth, they care about what pumps their bags. Some people go as far as to believe that perception is reality and that truth is a construction. I hope there’s a special place in hell for those people.
Most people do not care to find the truth, they care about what pumps their bags. Some people go as far as to believe that perception is reality and that truth is a construction
You are still doing the coding, you are just using a different programming language
That anyone uses LLMs to code is a testament to just how bad tooling and languages are. And that LLMs can replace developers at companies is a testament to how bad that company’s codebase and hiring bar is.
AI will eventually replace programming jobs in the same way compilers replaced programming jobs. In the same way spreadsheets replaced accounting jobs.
·geohot.github.io·
AI Coding | the singularity is nearer
Browser Automation | Playwright
Browser Automation | Playwright
Playwright MCP enables AI agents to control a web browser using Playwright for tasks like navigation, clicking, and form filling.
With Playwright MCP, you can instruct an AI to perform tasks like browsing websites, clicking buttons, filling forms, uploading files, and much more.
·playwright.dev·
Browser Automation | Playwright
Kilo Code - The best AI coding agent for VS Code
Kilo Code - The best AI coding agent for VS Code
Write code more efficiently by generating code, automating tasks, and providing suggestions
The best AI coding agent Kilo combines the best features of AI coding tools. Batteries included.
·kilocode.ai·
Kilo Code - The best AI coding agent for VS Code