#apps
The group chats that changed America | Semafor
“It’s the same thing happening on both sides, and I’ve been amazed at how much this is coordinating our reality,” said the writer Thomas Chatterton Williams, who was for a time a member of a group chat with Andreessen. “If you weren’t in the business at all, you’d think everyone was arriving at conclusions independently — and [they’re] not. It’s a small group of people who talk to each other and overlap between politics and journalism and a few industries.”
The political journalist Mark Halperin, who now runs 2WAY and has a show on Megyn Kelly’s network, said it was remarkable that “the left seems largely unaware that some of the smartest and most sophisticated Trump supporters in the nation from coast to coast are part of an overlapping set of text chains that allow their members to share links, intel, tactics, strategy, and ad hoc assignments. Also: clever and invigorating jokes. And they do this (not kidding) like 20 hours a day, including on weekends.” He called their influence “substantial.”
·semafor.com·
The Trump Administration Accidentally Texted Me Its War Plans
The term principals committee generally refers to a group of the senior-most national-security officials, including the secretaries of defense, state, and the treasury, as well as the director of the CIA. It should go without saying—but I’ll say it anyway—that I have never been invited to a White House principals-committee meeting, and that, in my many years of reporting on national-security matters, I had never heard of one being convened over a commercial messaging app.
On Tuesday, March 11, I received a connection request on Signal from a user identified as Michael Waltz. Signal is an open-source encrypted messaging service popular with journalists and others who seek more privacy than other text-messaging services are capable of delivering. I assumed that the Michael Waltz in question was President Donald Trump’s national security adviser. I did not assume, however, that the request was from the actual Michael Waltz.
I accepted the connection request, hoping that this was the actual national security adviser, and that he wanted to chat about Ukraine, or Iran, or some other important matter. Two days later—Thursday—at 4:28 p.m., I received a notice that I was to be included in a Signal chat group. It was called the “Houthi PC small group.”
We discussed the possibility that these texts were part of a disinformation campaign, initiated by either a foreign intelligence service or, more likely, a media-gadfly organization, the sort of group that attempts to place journalists in embarrassing positions, and sometimes succeeds. I had very strong doubts that this text group was real, because I could not believe that the national-security leadership of the United States would communicate on Signal about imminent war plans. I also could not believe that the national security adviser to the president would be so reckless as to include the editor in chief of The Atlantic in such discussions with senior U.S. officials, up to and including the vice president.
I was still concerned that this could be a disinformation operation, or a simulation of some sort. And I remained mystified that no one in the group seemed to have noticed my presence. But if it was a hoax, the quality of mimicry and the level of foreign-policy insight were impressive.
According to the lengthy Hegseth text, the first detonations in Yemen would be felt two hours hence, at 1:45 p.m. eastern time. So I waited in my car in a supermarket parking lot. If this Signal chat was real, I reasoned, Houthi targets would soon be bombed. At about 1:55, I checked X and searched Yemen. Explosions were then being heard across Sanaa, the capital city. I went back to the Signal channel. At 1:48, “Michael Waltz” had provided the group an update. Again, I won’t quote from this text, except to note that he described the operation as an “amazing job.” A few minutes later, “John Ratcliffe” wrote, “A good start.” Not long after, Waltz responded with three emoji: a fist, an American flag, and fire. Others soon joined in, including “MAR,” who wrote, “Good Job Pete and your team!!,” and “Susie Wiles,” who texted, “Kudos to all – most particularly those in theater and CENTCOM! Really great. God bless.” “Steve Witkoff” responded with five emoji: two hands-praying, a flexed bicep, and two American flags. “TG” responded, “Great work and effects!” The after-action discussion included assessments of damage done, including the likely death of a specific individual. The Houthi-run Yemeni health ministry reported that at least 53 people were killed in the strikes, a number that has not been independently verified.
In an email, I outlined some of my questions: Is the “Houthi PC small group” a genuine Signal thread? Did they know that I was included in this group? Was I (on the off chance) included on purpose? If not, who did they think I was? Did anyone realize who I was when I was added, or when I removed myself from the group? Do senior Trump-administration officials use Signal regularly for sensitive discussions? Do the officials believe that the use of such a channel could endanger American personnel?
William Martin, a spokesperson for Vance, said that despite the impression created by the texts, the vice president is fully aligned with the president. “The Vice President’s first priority is always making sure that the President’s advisers are adequately briefing him on the substance of their internal deliberations,” he said. “Vice President Vance unequivocally supports this administration’s foreign policy. The President and the Vice President have had subsequent conversations about this matter and are in complete agreement.”
It is not uncommon for national-security officials to communicate on Signal. But the app is used primarily for meeting planning and other logistical matters—not for detailed and highly confidential discussions of a pending military action. And, of course, I’ve never heard of an instance in which a journalist has been invited to such a discussion.
Conceivably, Waltz, by coordinating a national-security-related action over Signal, may have violated several provisions of the Espionage Act, which governs the handling of “national defense” information, according to several national-security lawyers interviewed by my colleague Shane Harris for this story. Harris asked them to consider a hypothetical scenario in which a senior U.S. official creates a Signal thread for the express purpose of sharing information with Cabinet officials about an active military operation. He did not show them the actual Signal messages or tell them specifically what had occurred. All of these lawyers said that a U.S. official should not establish a Signal thread in the first place. Information about an active operation would presumably fit the law’s definition of “national defense” information. The Signal app is not approved by the government for sharing classified information. The government has its own systems for that purpose. If officials want to discuss military activity, they should go into a specially designed space known as a sensitive compartmented information facility, or SCIF—most Cabinet-level national-security officials have one installed in their home—or communicate only on approved government equipment, the lawyers said.
Normally, cellphones are not permitted inside a SCIF, which suggests that as these officials were sharing information about an active military operation, they could have been moving around in public. Had they lost their phones, or had they been stolen, the potential risk to national security would have been severe.
There was another potential problem: Waltz set some of the messages in the Signal group to disappear after one week, and some after four. That raises questions about whether the officials may have violated federal records law: Text messages about official acts are considered records that should be preserved.
“Intentional violations of these requirements are a basis for disciplinary action. Additionally, agencies such as the Department of Defense restrict electronic messaging containing classified information to classified government networks and/or networks with government-approved encrypted features,” Baron said.
It is worth noting that Donald Trump, as a candidate for president (and as president), repeatedly and vociferously demanded that Hillary Clinton be imprisoned for using a private email server for official business when she was secretary of state. (It is also worth noting that Trump was indicted in 2023 for mishandling classified documents, but the charges were dropped after his election.)
Waltz and the other Cabinet-level officials were already potentially violating government policy and the law simply by texting one another about the operation. But when Waltz added a journalist—presumably by mistake—to his principals committee, he created new security and legal issues. Now the group was transmitting information to someone not authorized to receive it. That is the classic definition of a leak, even if it was unintentional, and even if the recipient of the leak did not actually believe it was a leak until Yemen came under American attack.
·theatlantic.com·
Consider the Plight of the VC-Backed Privacy Burglars
Also, even putting aside the fact that first-party apps necessarily have certain advantages third-party apps do not (otherwise, there’d be no distinction), apps from the same developer have broad permission to share data and resources via app groups. Gmail can talk to Google Calendar, and Google Calendar has full access to Gmail’s address book. It’s no more “fundamentally anticompetitive” for Messages and Apple Mail to have full access to your Contacts address book than it was for Meta to launch Threads by piggybacking on the existing accounts and social graph of Instagram. If it’s unfair, it’s only unfair in the way that life in general is unfair.
·daringfireball.net·
Malleable software in the age of LLMs
Historically, end-user programming efforts have been limited by the difficulty of turning informal user intent into executable code, but LLMs can help open up this programming bottleneck. However, user interfaces still matter, and while chatbots have their place, they are an essentially limited interaction mode. An intriguing way forward is to combine LLMs with open-ended, user-moldable computational media, where the AI acts as an assistant to help users directly manipulate and extend their tools over time.
LLMs will represent a step change in tool support for end-user programming: the ability of normal people to fully harness the general power of computers without resorting to the complexity of normal programming. Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code; now that bottleneck is rapidly opening up thanks to LLMs.
If this hypothesis indeed comes true, we might start to see some surprising changes in the way people use software:

One-off scripts: Normal computer users have their AI create and execute scripts dozens of times a day, to perform tasks like data analysis, video editing, or automating tedious tasks.

One-off GUIs: People use AI to create entire GUI applications just for performing a single specific task—containing just the features they need, no bloat.

Build don't buy: Businesses develop more software in-house that meets their custom needs, rather than buying SaaS off the shelf, since it's now cheaper to get software tailored to the use case.

Modding/extensions: Consumers and businesses demand the ability to extend and mod their existing software, since it's now easier to specify a new feature or a tweak to match a user's workflow.

Recombination: Take the best parts of the different applications you like, and create a new hybrid that composes them together.
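To make the "one-off scripts" idea concrete, here is a minimal sketch of the kind of throwaway script a user might ask an AI to generate, say, totaling expenses by category. The data and column names ("category", "amount") are hypothetical, purely for illustration:

```python
# A hypothetical one-off script: sum expense amounts by category.
from collections import defaultdict

def summarize(rows):
    """Return total amount per category for a list of expense records."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["category"]] += float(row["amount"])
    return dict(totals)

rows = [
    {"category": "food", "amount": "12.50"},
    {"category": "travel", "amount": "40.00"},
    {"category": "food", "amount": "7.25"},
]
print(summarize(rows))  # {'food': 19.75, 'travel': 40.0}
```

The point is not the code itself but its disposability: a script like this is written, run once, and discarded, which is exactly the workflow the author predicts becoming routine.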
Chat will never feel like driving a car, no matter how good the bot is. In their 1986 book Understanding Computers and Cognition, Terry Winograd and Fernando Flores elaborate on this point: In driving a car, the control interaction is normally transparent. You do not think “How far should I turn the steering wheel to go around that curve?” In fact, you are not even aware (unless something intrudes) of using a steering wheel…The long evolution of the design of automobiles has led to this readiness-to-hand. It is not achieved by having a car communicate like a person, but by providing the right coupling between the driver and action in the relevant domain (motion down the road).
Think about how a spreadsheet works. If you have a financial model in a spreadsheet, you can try changing a number in a cell to assess a scenario—this is the inner loop of direct manipulation at work. But you can also edit the formulas! A spreadsheet isn’t just an “app” focused on a specific task; it’s closer to a general computational medium which lets you flexibly express many kinds of tasks. The “platform developers”—the creators of the spreadsheet—have given you a set of general primitives that can be used to make many tools. We might draw the double loop of spreadsheet interaction like this: you can edit numbers in the spreadsheet, but you can also edit formulas, which edits the tool itself.
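The double loop can be sketched in a few lines of code. In this toy model (the cell names and formulas are invented for illustration, not from the article), the inner loop edits cell values and the outer loop edits the formulas, i.e. the tool itself:

```python
# Toy model of a spreadsheet's double loop.
cells = {"price": 100.0, "qty": 3}
formulas = {"total": lambda c: c["price"] * c["qty"]}

def evaluate(name):
    """Evaluate a named formula against the current cell values."""
    return formulas[name](cells)

# Inner loop: direct manipulation -- change a number, see the result.
cells["qty"] = 5
assert evaluate("total") == 500.0

# Outer loop: edit the formula itself -- extend the tool.
formulas["total"] = lambda c: c["price"] * c["qty"] * 0.5  # half-price scenario
assert evaluate("total") == 250.0
```

The inner loop never touches `formulas`; the outer loop does. That separation is what makes the spreadsheet a medium rather than a fixed-function app.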
what if you had an LLM play the role of the local developer? That is, the user mainly drives the creation of the spreadsheet, but asks for technical help with some of the formulas when needed? The LLM wouldn’t just create an entire solution, it would also teach the user how to create the solution themselves next time.
This picture shows a world that I find pretty compelling. There’s an inner interaction loop that takes advantage of the full power of direct manipulation. There’s an outer loop where the user can also more deeply edit their tools within an open-ended medium. They can get AI support for making tool edits, and grow their own capacity to work in the medium. Over time, they can learn things like the basics of formulas, or how a VLOOKUP works. This structural knowledge helps the user think of possible use cases for the tool, and also helps them audit the output from the LLMs. In a ChatGPT world, the user is left entirely dependent on the AI, without any understanding of its inner mechanism. In a computational medium with AI as assistant, the user’s reliance on the AI gently decreases over time as they become more comfortable in the medium.
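As a small illustration of the kind of structural knowledge the user picks up, here is roughly what a spreadsheet VLOOKUP does (exact-match form), modeled in a few lines; the table and column index are made up for the example:

```python
# Rough model of an exact-match VLOOKUP: find a row by its key in the
# first column, return the value from the requested column.
def vlookup(key, table, col_index):
    for row in table:
        if row[0] == key:
            return row[col_index]
    return "#N/A"  # spreadsheet convention for "not found"

prices = [
    ["apples", 1.20],
    ["pears", 0.95],
]
print(vlookup("pears", prices, 1))  # 0.95
```

Understanding the mechanism at this level is what lets the user audit an AI-written formula rather than trust it blindly.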
·geoffreylitt.com·
Writing with AI
iA Writer's vision for using AI in the writing process
Thinking in dialogue is easier and more entertaining than struggling with feelings, letters, grammar and style all by ourselves. Used as a writing dialogue partner, ChatGPT can become a catalyst for clarifying what we want to say. Even if it is wrong. Sometimes we need to hear what’s wrong to understand what’s right.
Seeing in clear text what is wrong or, at least, what we don’t mean can help us set our minds straight about what we really mean. If you get stuck, you can also simply let it ask you questions. If you don’t know how to improve, you can tell it to be evil in its critique of your writing.
Just compare usage with AI to how we dealt with similar issues before AI:

Discussing our writing with others is a general practice and regarded as universally helpful; honest writers honor and credit their discussion partners.
We already use spell checkers and grammar tools.
It’s common practice to use human editors for substantial or minor copy editing of our public writing.
Clearly, using dictionaries and thesauri to find the right expression is not a crime.
Using AI in the editor replaces thinking. Using AI in dialogue increases thinking. Now, how can we connect the editor and the chat window without making a mess? Is there a way to keep human and artificial text apart?
·ia.net·
Online daters love to hate on Hinge. 10 years in, it’s more popular than ever.
One key problem across the apps is the slog of self-presentation, or “impression management,” said Rachel Katz, a digital media sociologist who studies online dating at the University of Salford in the UK. “An important aspect of it is knowing your audience,” Katz said. On dating apps, you don’t know who exactly you’re presenting yourself to when picking a profile picture or composing your bio. You also don’t have physical cues that can help you adjust that self-presentation. “You’re trying to come up with something that’s generally appealing to people, but it can’t be too weird. It can’t be too unique,” said Bryce. “That’s partly why it’s exhausting,” Katz explains, “because it’s this constant labor. ... You’re not really sure of how to do it, you can’t just fit into a comfortable social role.”
When dating apps are not delivering on compatibility, Dean said, they are leading you to “believe that there’s a forever volume of people you can always like.”
Ury rejects the notion that apps should be asking people for more about themselves in writing or through extensive questionnaires. Users may match up on paper but end up disappointed in real life. “I would have rather that people understand that sooner by meeting up earlier,” she said. “Use the app as a matchmaker who gives you the matches — and then, as quickly as possible, the two of you should be chatting live to see if you are a match,” she said. “We found that three days of chatting is the sweet spot for scheduling a date.”
·vox.com·
Quality software deserves your hard‑earned cash
Quality software from independent makers is like quality food from the farmer’s market. A jar of handmade organic jam is not the same as mass-produced corn syrup-laden jam from the supermarket. Industrial fruit jam is filled with cheap ingredients and shelf stabilizers. Industrial software is filled with privacy-invasive trackers and proprietary formats. Google, Apple, and Microsoft make industrial software. Like industrial jam, industrial software has its benefits — it’s cheap, fairly reliable, widely available, and often gets the job done.
Big tech companies have the ability to make their software cheap by subsidizing costs in a variety of ways:
Google sells highly profitable advertising and makes its apps free, but you are subjected to ads and privacy-invasive tracking.
Apple sells highly profitable devices and makes its apps free, but locks you into a proprietary ecosystem.
Microsoft sells highly profitable enterprise contracts using a bundling strategy, and makes its apps cheap, also locking you into a proprietary ecosystem.
I’m not saying these companies are evil. But their subsidies create the illusion that all software should be cheap or free.
Independent makers of quality software go out of their way to make apps that are better for you. They take a principled approach to making tools that don’t compromise your privacy, and don’t lock you in. Independent software makers are people you can talk to. Like quality jam from the farmer’s market, you might become friends with the person who made it — they’ll listen to your suggestions and your complaints.
Big tech companies earn hundreds of billions of dollars and employ hundreds of thousands of people. When they make a new app, they can market it to their billions of customers easily. They have unbeatable leverage over the cost of developing and maintaining their apps.
·stephango.com·