Dating someone with bad taste
Marx’s definition captures that taste isn’t just having an eye, ear, or sense for quality; it’s about having an accurate filter for the choices that are uniquely you. As he explains, “There are occasional sui generis taste geniuses, but most people with good taste…are very curious and studious people who have learned it over time.”
A better barometer of whether someone has authentically cultivated their own taste—or merely adopted what the algorithm feeds them—is their enthusiasm for sharing what they’re into and why. For instance, I have little personal interest in exploring TV or movies, which admittedly might be off-putting to some. However, the last guy I dated had what I consider to be great taste in this area. Unfamiliar picks from the 1970s through the ‘90s, international and domestic alike – I loved that he could open me up to this world. His world.
if shared tastes are sometimes important and sometimes not, how should we incorporate taste into our dating decisions? According to Dr. Akua Boateng, a licensed psychotherapist with an emphasis in individual and couples therapy, how you and your significant other blend your interests is the real indicator of compatibility. “It really goes back to people’s psychology or politics of difference,” Boateng says. If differences are the kindling for conflict rather than connection, compromise, and acceptance, it’s doomed from the start. “If you're coming from two different worlds, and the things that make you tick and find joy are diametrically opposed, you're going to have conflict in how you spend your time,” she says.
“From 2009 through 2014, it felt like people were bringing real life, morals, values and judgements to the internet, whereas now it feels like we’re bringing internet values and judgements to real life and trying to force them into how we move and interact…” says Mark Sabino, a product designer and cultural critic. The ease with which algorithms relentlessly serve up “content” has brought a societal shift toward liking or disliking things that are relatable rather than personal.
As we grow together within relationships, we’re continuously collecting new markers of taste to bring home to our person. It’s an exchange in perpetuity – memes, restaurants, recipes – whatever moves you to feel something, you’re likely sharing with your partner. As Portrait of a Lady director Céline Sciamma told The Independent, “A relationship is about inventing your own language. You’ve got the jokes, you’ve got the songs, you have this anecdote that’s going to make you laugh three years later. It’s this language that you build.”
As much as taste can be a connector and a litmus test, it’s unreliable as a fixed lens for selecting partners. Instead of evaluating every prospect based on how they match up “on paper” to your taste do’s and don’ts, both Marx and Boateng point out that taste is one of multiple characteristics that can influence the quality of relationships. But if you just can’t get over someone’s allegiance to Taylor Swift or Burning Man, Boateng says, “It could be a sign that how this person operates in the world is just not intriguing to [you]. It's not problematic or bad. It's just not uniquely intriguing to you.” And here, you should definitely trust your taste.
·app.myshelfy.xyz·
Captain's log - the irreducible weirdness of prompting AIs
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
for a 100-problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president's advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
There is no single magic word or phrase that works all the time, at least not yet. You may have heard about studies that suggest better outcomes from promising to tip the AI or telling it to take a deep breath or appealing to its “emotions” or being moderately polite but not groveling. And these approaches seem to help, but only occasionally, and only for some AIs.
The three most successful approaches to prompting are all useful and pretty easy to do. The first is simply adding context to a prompt. There are many ways to do that: give the AI a persona (you are a marketer), an audience (you are writing for high school students), an output format (give me a table in a Word document), and more. The second approach is few-shot prompting: giving the AI a few examples to work from. LLMs work well when given samples of what you want, whether that is an example of good output or a grading rubric. The final tip is to use Chain of Thought, which seems to improve most LLM outputs. While the original meaning of the term is a bit more technical, a simplified version just asks the AI to go step-by-step through instructions: first, outline the results; then produce a draft; then revise the draft; finally, produce a polished output.
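The three approaches above can be combined in a single prompt. Here is a minimal sketch that assembles one from context (persona and audience), a few-shot example, and a simplified chain-of-thought instruction; the function name and all example strings are illustrative, not from the article:

```python
def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt from context, few-shot examples, and step-by-step instructions."""
    lines = [
        "You are a marketer.",                        # context: persona
        "You are writing for high school students.",  # context: audience
        "",
    ]
    # Few-shot: show the model samples of the output we want.
    for sample_input, sample_output in examples:
        lines.append(f"Input: {sample_input}")
        lines.append(f"Output: {sample_output}")
        lines.append("")
    # Chain of thought (simplified): ask the model to work in explicit steps.
    lines.append("First, outline the results; then produce a draft; "
                 "then revise the draft; finally, produce a polished output.")
    lines.append(f"Input: {task}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    "Explain compound interest in one sentence.",
    [("Explain inflation in one sentence.",
      "Inflation means your money buys a little less each year.")],
)
print(prompt)
```

The string this produces is what you would send as the user message to whichever model you are working with; the point is only the shape of the prompt, not any particular API.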
It is not uncommon to see good prompts make a task that was impossible for the LLM into one that is easy for it.
while we know that GPT-4 generates better ideas than most people, the ideas it comes up with seem relatively similar to each other. This hurts overall creativity because you want your ideas to be different from each other, not similar. Crazy ideas, good and bad, give you more of a chance of finding an unusual solution. But some initial studies of LLMs showed they were not good at generating varied ideas, at least compared to groups of humans.
People who use AI a lot are often able to glance at a prompt and tell you why it might succeed or fail. Like all forms of expertise, this comes with experience - usually at least 10 hours of work with a model.
There are still going to be situations where someone wants to write prompts that are used at scale, and, in those cases, structured prompting does matter. Yet we need to acknowledge that this sort of “prompt engineering” is far from an exact science, and not something that should necessarily be left to computer scientists and engineers. At its best, it often feels more like teaching or managing, applying general principles along with an intuition for other people, to coach the AI to do what you want. As I have written before, there is no instruction manual, but with good prompts, LLMs are often capable of far more than might be initially apparent.
·oneusefulthing.org·
ChatGPT Is a Blurry JPEG of the Web
This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone
When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them
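The neighbor-averaging idea described above can be sketched in a few lines. This is a toy version of the interpolation a lossy image decompressor performs, included only to make the analogy concrete; it is not a claim about ChatGPT's actual mechanism, and the grid values are made up:

```python
def reconstruct_pixel(grid, row, col):
    """Estimate a lost pixel value as the mean of its in-bounds neighbors."""
    neighbors = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
            neighbors.append(grid[r][c])
    return sum(neighbors) / len(neighbors)

image = [
    [10, 20, 30],
    [40, None, 60],   # the center pixel was lost during compression
    [70, 80, 90],
]
print(reconstruct_pixel(image, 1, 1))  # averages 20, 80, 40, 60 -> 50.0
```

The reconstructed value is plausible but invented: nothing guarantees the original pixel was 50, which is exactly the gap between interpolation and the ground truth that the essay's "blur" metaphor points at.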
they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic. The Web certainly contains explanations of carrying the “1,” but GPT-3 isn’t able to incorporate those explanations. GPT-3’s statistical analysis of examples of arithmetic enables it to produce a superficial approximation of the real thing, but no more than that.
In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression
Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.
Can large language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question. There is a genre of art known as Xerox art, or photocopy art, in which artists use the distinctive properties of photocopiers as creative tools. Something along those lines is surely possible with the photocopier that is ChatGPT, so, in that sense, the answer is yes
If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
Sometimes it’s only in the process of writing that you discover your original ideas.
Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
·newyorker.com·
The Age of Algorithmic Anxiety
“I’ve been on the internet for the last 10 years and I don’t know if I like what I like or what an algorithm wants me to like,” Peter wrote. She’d come to see social networks’ algorithmic recommendations as a kind of psychic intrusion, surreptitiously reshaping what she’s shown online and, thus, her understanding of her own inclinations and tastes.
Besieged by automated recommendations, we are left to guess exactly how they are influencing us, feeling in some moments misperceived or misled and in other moments clocked with eerie precision.
·newyorker.com·