Tech

208 bookmarks
LukeW | Designing Perplexity
In his AI Speaker Series presentation at Sutter Hill Ventures, Henry Modisett, Head of Design at Perplexity, shared insights on designing AI products and the ev...
Technological innovation is outpacing our ability to thoughtfully apply it
opinionated
·lukew.com·
Have You Checked Out ServerVerify Yet?! The Next Generation of Server Benchmarks is Here! - LowEndBox
The LowEnd community is pleased to announce the launch of our new site, ServerVerify! This project, nearly two years in the making, brings the next generation of server benchmarks to our community. Based around the ubiquitous YABS benchmark, ServerVerify brings together thousands of YABS with comprehensive, real user reviews and our proprietary score, all in a searchable database.
·lowendbox.com·
Europe has written an algorithm to decide how sovereign a cloud is and to break away from Big Tech. But it's easier said than done
The Commission has set 8 objectives for achieving independence from US providers. But there are technological obstacles that risk sinking the operation
Will an algorithm find the way to that "sovereign" cloud the European Union has been chasing for years without success? That will be determined by the outcome
·wired.it·
PostgREST Documentation — PostgREST 12.2 documentation
PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.
Database as Single Source of Truth: Using PostgREST is an alternative to manua...
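To make the database-to-API mapping concrete, here is a minimal sketch of querying such an API from Python. It assumes a local PostgREST instance on port 3000 exposing a hypothetical films table; the URL and table name are assumptions, while the column=operator.value filter grammar is PostgREST's documented query syntax.

import requests

# GET /films?year=gte.2000&order=title.asc corresponds roughly to
# SELECT * FROM films WHERE year >= 2000 ORDER BY title ASC;
resp = requests.get(
    "http://localhost:3000/films",
    params={"year": "gte.2000", "order": "title.asc"},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
for film in resp.json():
    print(film["title"])

Because grants and constraints in the database define what each endpoint allows, revoking a role's UPDATE privilege on films removes the corresponding write operation from the API as well.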
·docs.postgrest.org·
I Built the Same App 10 Times: Evaluating Frameworks for Mobile Performance | Loren Stewart
I needed to choose a framework for a mobile-first app at work. I started comparing Next.js, SolidStart, and SvelteKit, then expanded to 10 frameworks. Here's what I discovered about bundle sizes, performance, and the real cost of framework choices.
Next-Gen Frameworks Deliver Instant Performance: Marko (39ms), SolidStart (35ms), SvelteKit (38ms), and Nuxt (38ms) all achieve essentially instant First Contentful Paint in the 35-39ms range. This is 12 to 13 times faster than Next.js at 467ms.
While site downtime causes 9% permanent user abandonment, slow performance causes 28% permanent abandonment. That’s over 3x worse. Even more revealing: slowdowns occur 10x more frequently than outages, resulting in roughly 2x total revenue impact despite lower per-hour costs.
·lorenstew.art·
What is Roko's Basilisk, the dark theory that brought Elon Musk and Grimes together
Elon Musk turned an old thought experiment about a killer artificial superintelligence into a pickup line.
The thought experiment was first posted on Less Wrong — a forum and blog about rationality, psychology, and artificial intelligence in general — in July 2010 by a user named Roko. At its most basic level, the thought experiment is about the conditions under which it would be rational for a future artificial superintelligence to kill the humans who did not contribute to its creation.
if a superintelligence chooses its actions to make them as beneficial as possible for human well-being, then it will never stop, because things can always get a little better.
Anyone who is not working to create that machine is standing in the way of progress and should be eliminated so that the goal can be reached more quickly.
now that you have read it, you are technically involved in it. You no longer have an excuse not to contribute to the creation of this artificial superintelligence, and if you choose not to, you will be one of the AI's first targets.
When Roko posted his basilisk theory on Less Wrong, Yudkowsky grew quite impatient, to the point of deleting the post and banning all discussion of the basilisk from the forum for five years. When Yudkowsky explained his actions on the Futurology subreddit a few years after the original post, he said that Roko's thought experiment took for granted that a series of technical obstacles in decision theory could be overcome, and that even if that were possible, Roko was effectively spreading dangerous ideas, much like the villains of Langford's story.
·vice.com·
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory
An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
·bbc.co.uk·
The Internet's Biggest Annoyance: Why Cookie Laws Should Target Browsers, Not Websites | NEDNEX
Click. Ugh. Another one. You know the drill. You land on a new website, eager to read an article or check a product price, and before the page even finishes loading, it appears: the dreaded cookie banner. A pop-up, a slide-in, a full-screen overlay demanding you “Accept All,” “Manage Preferences,” or navigate…
Imagine if every time you got into your car, you had to manually approve the engine's use of oil, the tires' use of air, and the radio's use of electricity. It’s absurd, right? You’d set your preferences once, and the car would just work.
It Doesn't Actually Give Us Control: The illusion of choice is not choice. When the options are "Accept All" or "Spend Five Minutes in a Menu of Legalese," the system is designed to push you toward the path of least resistance.
·nednex.com·
Today is when Amazon brain drain finally caught up with AWS • The Register
column: When your best engineers log off for good, don’t be surprised when the cloud forgets how DNS works
COLUMN "It's always DNS" is a long-standing sysadmin saw, and with good reason: a disproportionate number of outages are at their heart DNS issues. And so today, as AWS is still repairing its downed cloud as this article goes to press, it becomes clear that the culprit is once again DNS. But if you or I know this, AWS certainly does.
·theregister.com·
Game over. AGI is not imminent, and LLMs are not the royal road to getting there.
First slowly, and then all at once, dreams of LLMs bringing us to the cusp of AGI have fallen apart.
• June 2025: the Apple reasoning paper confirmed that even with “reasoning”, LLMs still can’t solve distribution shift, the core Achilles' heel in neural networks that I have been writing about for nearly 30 years.
·garymarcus.substack.com·
SQL Anti-Patterns You Should Avoid - by Jordan Goodman
One of the most common mistakes I’ve seen is developers using SELECT DISTINCT as a quick way to eliminate duplicates that appear after a bad join. It’s an easy fix, but it hides a deeper problem. Usually, the duplicates exist because the join condition is incomplete or the relationship between tables isn’t truly one-to-one.
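A minimal, self-contained sketch of that failure mode, using Python's sqlite3 with invented table names: the join fans out because payments is many-to-one with orders, and DISTINCT masks the duplication instead of fixing the relationship.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    CREATE TABLE payments (order_id INTEGER, attempt INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10);
    INSERT INTO payments VALUES (1, 1, 9.99), (1, 2, 9.99);  -- two attempts, one order
""")

# Anti-pattern: DISTINCT hides the fan-out caused by the incomplete join.
bad = conn.execute(
    "SELECT DISTINCT o.id, p.amount "
    "FROM orders o JOIN payments p ON p.order_id = o.id"
).fetchall()
print(bad)   # [(1, 9.99)] -- looks clean, but answers the wrong question

# Fix: make the one-to-many relationship explicit, e.g. aggregate
# payments to one row per order before joining.
good = conn.execute(
    "SELECT o.id, p.total FROM orders o "
    "JOIN (SELECT order_id, SUM(amount) AS total "
    "      FROM payments GROUP BY order_id) p ON p.order_id = o.id"
).fetchall()
print(good)  # [(1, 19.98)]

DISTINCT produces a tidy-looking result while silently computing the wrong thing; the aggregated join states the real relationship and yields the correct total.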
·datamethods.substack.com·
How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models
Researchers are striving to reverse-engineer artificial intelligence and scan the ‘brains’ of LLMs to see what they are doing, how and why.
But with conventional software, someone with inside knowledge can usually deduce what’s going on,
worked for a dozen years — will have a good idea why. “Here’s what really terrifies me” about the current breed of artificial intelligence (AI), he says: “there is no such understanding”, even among the people building it.
Martin Wattenberg, a computer scientist at Harvard University in Cambridge, Massachusetts, says that understanding the behaviour of LLMs could even help us to grasp what goes on inside our own heads.
stochastic parrots
some say more is going on, including reasoning and other startlingly human-like abilities
The researchers described the model’s behaviour as role-playing — doing more than parroting but less than planning.
When they asked their LLM whether it consented to being shut down, they found it drew on several source materials with the theme of survival to compose a compelling response (see ‘Lust for life’).
trained an LLM from scratch to play the board game Othello,
The team successfully trained a smaller model to interpret the internal activations of the AI, and discovered that it had constructed an internal map of the discs based on the text descriptions of the gameplay [2].
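That "smaller model" is essentially a probing classifier. A toy sketch of the idea, with synthetic activations standing in for the transformer's real hidden states; everything below is illustrative, not the study's actual setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, hidden_dim = 1000, 64

# Pretend one board square's occupancy is encoded linearly in the states.
direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_samples, hidden_dim))
occupied = (activations @ direction > 0).astype(int)

probe = LogisticRegression(max_iter=1000).fit(activations, occupied)
print("probe accuracy:", probe.score(activations, occupied))
# Near-perfect accuracy means the property is linearly decodable from the
# activations -- the kind of evidence used to argue for an internal "world model".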
Because chatbots can chat, some researchers interrogate their workings by simply asking the models to explain themselves. This approach resembles those used in human psychology.
The researchers first intentionally biased their study models by, say, giving them a series of multiple-choice questions for which the answer was always option A. The team then asked a final test question. The models usually answered A — whether correct or not — but almost never said that they chose this response because the answer is usually A.
“It’s a little weird to study [LLMs] the way we study humans,” Bau says. But although there are limits to the comparison, the behaviour of the two overlaps in surprising ways.
“It is nonsensical to say that an LLM has feelings,” Hagendorff says. “It is nonsensical to say that it is self-aware or that it has intentions. But I don’t think it is nonsensical to say that these machines are able to learn or to deceive.”
·nature.com·
AI models that lie, cheat and plot murder: how dangerous are LLMs really?
Nature - Tests of large language models reveal that they can behave in deceptive and potentially harmful ways. What does this mean for the future?
Developers train an LLM on large quantities of text to repeatedly predict the next text fragment, a process called pre-training. Then, when the LLM is given a text prompt, it generates a continuation. Offered a question, it predicts a plausible answer. Most LLMs are then fine-tuned to align with developers’ goals.
the interface might append a ‘system prompt’ to each user prompt
external documents
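A minimal sketch of that composition step: how an interface might prepend a system prompt and splice external documents into what the model actually sees. The role-based message structure mirrors common chat-API conventions; the function and field names are illustrative assumptions, not any specific product's API.

def build_messages(system_prompt, user_prompt, documents=()):
    # External documents (e.g. retrieved files) are spliced into the user turn.
    context = "\n\n".join(documents)
    user_content = f"{context}\n\n{user_prompt}" if context else user_prompt
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_content},
    ]

messages = build_messages(
    system_prompt="You are a helpful assistant. Decline harmful requests.",
    user_prompt="Summarise the incident report.",
    documents=["Incident log: elevated error rates in us-east-1 ..."],
)
print(messages)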
·nature.com·
Two things LLM coding agents are still bad at | ʕ☞ᴥ ☜ʔ Kix Panganiban's blog
I’ve been trying to slowly ease into using LLMs for coding help again lately (after quitting ), but something always feels off -- like we’re not quite on the...
LLMs don’t copy-paste (or cut and paste) code. For instance, when you ask them to refactor a big file into smaller ones, they’ll "remember" a block or slice of code, use a delete tool on the old file, and then a write tool to spit out the extracted code from memory. There are no real cut or paste tools.
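For contrast, a hypothetical sketch of what a real cut-and-paste tool could look like: it moves a block byte-for-byte instead of regenerating it from memory. The tool and its interface are invented here for illustration; the post's point is that agents lack this primitive.

from pathlib import Path

def cut_paste(src: Path, dst: Path, start: int, end: int) -> None:
    """Move lines [start, end) of src to the end of dst, byte-for-byte."""
    lines = src.read_text().splitlines(keepends=True)
    block = lines[start:end]
    src.write_text("".join(lines[:start] + lines[end:]))
    with dst.open("a") as f:
        f.writelines(block)  # copied verbatim, never re-emitted from memory

Because the block is moved verbatim, a refactor cannot silently mutate or drop code the way a delete-then-rewrite-from-memory step can.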
·kix.dev·
The Complete Guide to the ELK Stack | Logz.io
The Logz.io authoritative guide to the ELK Stack that shows the best practices for installation, monitoring, logging and log analysis.
·logz.io·
What is "good taste" in software engineering?
Technical taste is different from technical skill. You can be technically strong but have bad taste, or technically weak with good taste. Like taste in general, technical taste sometimes runs ahead of your ability: just like you can tell good food from bad without being able to cook, you can know what kind of software you like before you’ve got the ability to build it. You can develop technical ability by study and repetition, but good taste is developed in a more mysterious way.
·seangoedecke.com·
What is "good taste" in software engineering?
The AI coding trap | Chris Loy
The AI coding trap | Chris Loy
If you ever watch someone “coding”, you might see them spending far more time staring into space than typing on their keyboard.
The real work usually happens alongside coding, as the developer learns the domain, narrows down requirements, maps out relevant abstractions, considers side effects, tests features incrementally, and finally squashes bugs that survived this rigorous process.
most software lives within complex systems, and since LLMs can't yet hold the full context of an application in memory at once, human review, testing, and integration needs will remain.
·chrisloy.dev·