In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege

An experienced college essay reviewer identifies seven distinct patterns that reveal ChatGPT's writing "fingerprint" in admission essays, demonstrating how AI-generated content, despite being well-written, often lacks originality and follows predictable patterns that make it detectable to experienced readers.

Seven key indicators of ChatGPT-written essays:

  1. Specific vocabulary choices (e.g., "delve," "tapestry")
  2. Limited types of extended metaphors (weaving, cooking, painting, dance, classical music)
  3. Distinctive punctuation patterns (em dashes, mixed apostrophe styles)
  4. Frequent use of tricolons (three-part phrases), especially ascending ones
  5. Common phrase pattern: "I learned that the true meaning of X is not only Y, it's also Z"
  6. Predictable future-looking conclusions: "As I progress... I will carry..."
  7. Multiple ending syndrome (similar to Lord of the Rings movies)
·reddit.com·
Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently than an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
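As a toy illustration of treating ideas as points in a feature space, the sketch below compares two invented feature vectors with cosine similarity. The feature names and numbers are placeholders for illustration, not drawn from any real model.

```python
# Toy sketch of "ideas as points in a feature space". The feature names and
# the vectors are made up; a real system would derive them from a model.
import math

features = ["narrative tension", "formality", "figurative language"]

idea_a = [0.8, 0.2, 0.6]   # hypothetical feature activations for one passage
idea_b = [0.7, 0.3, 0.5]   # ... and for another

def cosine_similarity(u, v):
    """Angle-based closeness of two feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(idea_a, idea_b))  # how "close" the two ideas are
```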
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because they didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we can share them with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and, of course, software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
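A minimal sketch of how such a concept spectrogram could be computed: split the document into chunks and score each chunk along a few features, producing a chunk-by-feature matrix a UI could render as a heatmap. The keyword-based scorers and the `FEATURES` table are toy stand-ins for real model-derived features, invented here for illustration.

```python
# Toy "concept spectrogram": one row per chunk of text, one column per feature.
FEATURES = {
    "sentiment": {"joy", "bright", "hope"},
    "tension":   {"storm", "afraid", "suddenly"},
}

def chunk(text, size=40):
    """Split text into fixed-size word chunks ("time slices" of the document)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk_text, vocab):
    """Fraction of words in the chunk that hit the feature's toy vocabulary."""
    words = chunk_text.lower().split()
    return sum(w.strip(".,") in vocab for w in words) / max(len(words), 1)

def spectrogram(text):
    return [
        {name: score(c, vocab) for name, vocab in FEATURES.items()}
        for c in chunk(text)
    ]

demo = "The storm broke suddenly and we were afraid, but by morning there was bright hope and joy."
for row in spectrogram(demo):
    print(row)   # each row is one slice; each value is one concept's prominence
```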
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
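One way to imagine the highlight-based indicator: project each word's vector onto an assumed style direction and color the word by the result. The 3-d embeddings and the `figurative_direction` axis below are made up; in practice both would come from a language model's feature space.

```python
# Toy word-level "semantic edit direction" indicator.
embeddings = {                            # hypothetical word vectors
    "heart":  [0.9, 0.1, 0.3],
    "pump":   [0.1, 0.8, 0.2],
    "blazed": [0.95, 0.2, 0.1],
}
figurative_direction = [1.0, -0.5, 0.0]   # assumed "figurative language" axis

def projection(vec, direction):
    """How far a word's vector points along the style direction."""
    return sum(v * d for v, d in zip(vec, direction))

for word, vec in embeddings.items():
    s = projection(vec, figurative_direction)
    marker = "more figurative" if s > 0.5 else "less figurative"
    print(f"{word:>8}: {s:+.2f}  ({marker})")
```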
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can also imagine software developers and their package repositories like NPM to be something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library for linguistic features anyone can download and apply to their thinking and writing. A catalogue of narrative voice, speaking tone, or flavor of figurative language sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of interaction becomes much more powerful in the space of interpretable features, because features can be combined much more cleanly than textual instructions in prompts.
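A hedged sketch of why features compose more cleanly than prompt strings: style directions can be mixed as a weighted sum and added to a model's hidden state. The vectors, weights, and names (`feynman_clarity`, `wry_humor`) are placeholders; real features would come from something like a sparse autoencoder over model activations, not from this toy code.

```python
# Toy composition of style features as vector arithmetic.
feynman_clarity = [0.6, -0.2, 0.1, 0.4]   # hypothetical feature directions
wry_humor       = [0.1,  0.5, -0.3, 0.2]

def combine(features_with_weights):
    """Weighted sum of feature directions -> a single steering vector."""
    dim = len(features_with_weights[0][0])
    out = [0.0] * dim
    for vec, weight in features_with_weights:
        for i in range(dim):
            out[i] += weight * vec[i]
    return out

steering = combine([(feynman_clarity, 0.8), (wry_humor, 0.3)])

def steer(hidden_state, steering_vector):
    """Nudge a (toy) hidden state along the combined style direction."""
    return [h + s for h, s in zip(hidden_state, steering_vector)]

print(steer([0.2, 0.1, 0.0, -0.1], steering))
```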
·thesephist.com·
Photoshop for text
In the near future, transforming text will become as commonplace as filtering images. A new set of tools is emerging, like Photoshop for text. Up until now, text editors have been focused on input. The next evolution of text editors will make it easy to alter, summarize and lengthen text. You’ll be able to do this for entire documents, not just individual sentences or paragraphs. The filters will be instantaneous and as good as if you wrote the text yourself. You will also be able to do this with local files, on your device, without relying on remote servers.
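A hypothetical sketch of what composable text filters might look like, by analogy with stacking image adjustments. The `TextFilter` interface and the toy filters below are invented here; the essay does not specify any API, and a real tool would back each filter with a local model rather than string tricks.

```python
# Hypothetical "Photoshop for text" pipeline: filters applied in sequence.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TextFilter:
    name: str
    apply: Callable[[str], str]   # in a real tool, backed by a local model

# Toy stand-ins for model-backed transformations.
formalize = TextFilter("formalize", lambda t: t.replace("can't", "cannot"))
shorten   = TextFilter("shorten", lambda t: " ".join(t.split()[: len(t.split()) // 2]))

def run_pipeline(text: str, filters: list[TextFilter]) -> str:
    """Apply filters in order, like stacking adjustment layers on an image."""
    for f in filters:
        text = f.apply(text)
    return text

print(run_pipeline("We can't know what tools we can't imagine yet.", [formalize, shorten]))
```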
Initially, many of Photoshop’s capabilities were adaptations of analog effects. For example, “dodge” and “burn” are old darkroom techniques used to alter photographs. There are countless skeuomorphic names throughout digital image editing tools that refer to analog processes.
Text seems like it would be easier to manipulate than images. But languages have far more rules than images do. A reader expects writing to follow proper spelling and grammar, a consistent tone, and a logical sequence of sentences. Until now, solving this problem required building complex rule-based algorithms. Now we can solve this problem with AI models that can teach themselves to create readable text in any language.
·stephango.com·
ChatGPT Is a Blurry JPEG of the Web
This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone.
When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them.
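The analogy can be made concrete with a toy interpolation between two made-up embedding vectors; a real system would embed the two source texts with a model and decode text from the blended point, which this sketch does not attempt.

```python
# Toy "two points in lexical space" interpolation.
style_a = [0.9, 0.1, 0.4, 0.0]   # e.g. the Declaration-of-Independence register
topic_b = [0.2, 0.7, 0.1, 0.5]   # e.g. losing a sock in the dryer

def interpolate(u, v, t=0.5):
    """Point at fraction t along the segment from u to v."""
    return [(1 - t) * a + t * b for a, b in zip(u, v)]

blend = interpolate(style_a, topic_b, t=0.5)
print(blend)   # the "location between them" the analogy describes
```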
they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic. The Web certainly contains explanations of carrying the “1,” but GPT-3 isn’t able to incorporate those explanations. GPT-3’s statistical analysis of examples of arithmetic enables it to produce a superficial approximation of the real thing, but no more than that.
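For contrast, here is the grade-school carrying procedure written out explicitly, the step the article says GPT-3's statistical approximation of arithmetic misses.

```python
# Column addition with an explicit carry, i.e. "carrying the 1".
def add_with_carry(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))   # digit that stays in this column
        carry = total // 10              # the "1" carried to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("57", "68"))  # 125: 7+8=15, write 5 carry 1; 5+6+1=12
```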
In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression
Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.
Can large language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question. There is a genre of art known as Xerox art, or photocopy art, in which artists use the distinctive properties of photocopiers as creative tools. Something along those lines is surely possible with the photocopier that is ChatGPT, so, in that sense, the answer is yes.
If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
Sometimes it’s only in the process of writing that you discover your original ideas.
Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
·newyorker.com·