daviddao/awful-ai: 😈Awful AI is a curated list to track current scary usages of AI - hoping to raise awareness
Advanced ChatGPT: Full Guide
Wikipedia:Large language models - Wikipedia
I’m a Student. You Have No Idea How Much We’re Using ChatGPT.
Prompt Engineering
Think of language models like ChatGPT as a “calculator for words”
This is reflected in their name: a “language model” implies that they are tools for working with language. That’s what they’ve been trained to do, and it’s language manipulation where they truly excel.
Want them to work with specific facts? Paste those into the language model as part of your original prompt!
There are so many applications of language models that fit into this calculator-for-words category (a short code sketch follows the list):
Summarization. Give them an essay and ask for a summary.
Question answering: given these paragraphs of text, answer this specific question about the information they represent.
Fact extraction: ask for bullet points showing the facts presented by an article.
Rewrites: reword things to be more “punchy” or “professional” or “sassy” or “sardonic”—part of the fun here is using increasingly varied adjectives and seeing what happens. They’re very good with language after all!
Suggesting titles—actually a form of summarization.
World’s most effective thesaurus. “I need a word that hints at X”, “I’m very Y about this situation, what could I use for Y?”—that kind of thing.
Fun, creative, wild stuff. Rewrite this in the voice of a 17th century pirate. What would a sentient cheesecake think of this? How would Alexander Hamilton rebut this argument? Turn this into a rap battle. Illustrate this business advice with an anecdote about sea otters running a kayak rental shop. Write the script for a Kickstarter fundraising video about this idea.
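To make the "paste in your facts" pattern concrete, here is a minimal sketch using the OpenAI Python SDK. It assumes an OPENAI_API_KEY environment variable is set; the model name is illustrative, and any chat-capable model would do.

```python
# A minimal sketch of the "calculator for words" pattern: paste the source
# text into the prompt and ask for a transformation of it.
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

ARTICLE = """(paste the essay, article, or notes you want to work with here)"""

def calculator_for_words(instruction: str, text: str) -> str:
    """Apply a language instruction (summarize, extract facts, reword...) to text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You transform the text the user provides."},
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(calculator_for_words("Summarize this in three bullet points.", ARTICLE))
print(calculator_for_words("Reword this to be more punchy.", ARTICLE))
```

The same function covers summarization, question answering, fact extraction, and rewrites: only the instruction string changes.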
A flaw in this analogy: calculators are repeatable
Andy Baio pointed out a flaw in this particular analogy: calculators always give you the same answer for a given input. Language models don’t—if you run the same prompt through an LLM several times you’ll get a slightly different reply every time.
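A quick way to see this for yourself: send the identical prompt several times and compare the replies. This sketch assumes the same OpenAI SDK setup as above; the temperature parameter controls how much the sampling varies.

```python
# Illustrating the flaw in the calculator analogy: the same prompt, sent
# several times, typically yields different wording each time, because the
# model samples from a probability distribution over next tokens.
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = "Describe the ocean in one sentence."

for i in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",       # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,           # higher temperature -> more variation
    )
    print(f"Run {i + 1}: {response.choices[0].message.content}")

# Setting temperature=0 makes replies far more repeatable, but in practice
# still does not guarantee byte-identical output the way a calculator does.
```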
Investing in AI
Coming back to the internet analogy, how did Google, Amazon, etc. end up so successful? Metcalfe’s law explains this. It states that as more users join the network, the value of the network increases, thereby attracting even more users. The most important thing here was to make people join your network. The end goal was to build the largest network possible. Google did this with search, Amazon did this with retail, Facebook did this with social.
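Metcalfe’s law is easy to state precisely: with n users there are n(n−1)/2 possible pairwise connections, so a network’s potential value grows roughly with the square of its users. A few lines of Python make the scaling vivid:

```python
# Metcalfe's law in miniature: the value of a network scales with the
# number of possible pairwise connections, n*(n-1)//2, i.e. roughly n**2.
def metcalfe_value(n_users: int) -> int:
    """Number of possible pairwise connections between n users."""
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {metcalfe_value(n):>11,} possible connections")

# A 10x increase in users yields roughly 100x more connections, which is
# why "make people join your network" was the whole game.
```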
Collecting as much data as possible is important. But you don’t want just any data. The real competitive advantage lies in having high-quality proprietary data. Think about it this way: what does it take to build an AI system? It takes 1) data, the input that feeds into 2) AI models, which are analogous to machines, and 3) compute, the energy required to run those models. Today, most AI models have become standardized and widely available, and the cost of compute is rapidly trending toward zero. Hence AI models and compute have become commodities. The only thing that remains is data. But even data is widely available on the internet. Thus, a company can only have a true competitive advantage when it has access to high-quality proprietary data.
Recently, Chamath Palihapitiya gave an interview where he offered this interesting analogy, comparing large language models like GPT to refrigeration. He said, “People that invented refrigeration made some money. But most of the money was made by Coca-Cola, who used refrigeration to build an empire. And so similarly, companies building these large models will make some money, but the Coca-Cola is yet to be built.” What he meant is that right now there are a lot of companies crawling the open web to scrape data. Once that is widely available, like refrigeration, we will see companies and startups building on top of it with proprietary data.
Society's Technical Debt and Software's Gutenberg Moment
Past innovations have made costly things cheap enough to proliferate widely across society. The essay suggests LLMs will make software development vastly more accessible and productive, alleviating the "technical debt" caused by underproduction of software over decades.
Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things. It is almost infinitely malleable, able to slide and twist and contort itself such that, in its pliability, it pries open doorways as yet unseen.
the clearing price for software production will change. But not just because it becomes cheaper to produce software. In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster.
A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment. It is an exaggeration, but only a modest one, to say that it is a kind of Gutenberg moment, one where previous barriers to creation—scholarly, creative, economic, etc—are going to fall away, as people are freed to do things only limited by their imagination, or, more practically, by the old costs of producing software.
We have almost certainly been producing far less software than we need. The size of this technical debt is not knowable, but it cannot be small, so subsequent growth may be geometric. This would mean that as the cost of software drops to an approximate zero, the creation of software predictably explodes in ways that have barely been previously imagined.
Entrepreneur and publisher Tim O’Reilly has a nice phrase that is applicable at this point. He argues investors and entrepreneurs should “create more value than you capture.” The technology industry started out that way, but in recent years it has too often gone for the quick win, usually by running gambits from the financial services playbook. We think that for the first time in decades, the technology industry could return to its roots, and, by unleashing a wave of software production, truly create more value than it captures.
Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.
technology has a habit of confounding economics. When it comes to technology, how do we know those supply and demand lines are right? The answer is that we don’t. And that’s where interesting things start happening. Sometimes, for example, an increased supply of something leads to more demand, shifting the curves around. This has happened many times in technology, as various core components of technology tumbled down curves of decreasing cost for increasing power (or storage, or bandwidth, etc.).
Suddenly AI has become cheap, to the point where people are “wasting” it via “do my essay” prompts to chatbots, getting help with microservice code, and so on. You could argue that the price/performance of intelligence itself is now tumbling down a curve, much as has happened with prior generations of technology.
it’s worth reminding oneself that waves of AI enthusiasm have hit the beach of awareness once every decade or two, only to recede again as the hyperbole outpaces what can actually be done.
Pause Giant AI Experiments: An Open Letter - Future of Life Institute
Universal Summarizer by Kagi
Character.AI
ChatGPT Is a Blurry JPEG of the Web
This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone.
When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them.
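The neighbor-averaging step Chiang describes is just interpolation, and a toy version fits in a few lines. This is not ChatGPT’s actual mechanism, only an illustration of the “generate the point between two points” idea:

```python
# A toy version of the reconstruction step Chiang describes: when a value
# is lost during compression, estimate it by averaging its surviving
# neighbors. Plain interpolation, shown here for isolated gaps only.
def reconstruct(pixels: list) -> list:
    """Fill each missing (None) value with the average of its known neighbors."""
    result = []
    for i, p in enumerate(pixels):
        if p is not None:
            result.append(p)
        else:
            left = pixels[i - 1] if i > 0 else None
            right = pixels[i + 1] if i < len(pixels) - 1 else None
            known = [v for v in (left, right) if v is not None]
            result.append(sum(known) / len(known))
    return result

row = [2, 4, None, 8, 10]     # one pixel lost to compression
print(reconstruct(row))       # [2, 4, 6.0, 8, 10]
```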
they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic. The Web certainly contains explanations of carrying the “1,” but GPT-3 isn’t able to incorporate those explanations. GPT-3’s statistical analysis of examples of arithmetic enables it to produce a superficial approximation of the real thing, but no more than that.
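For reference, here is the grade-school procedure in question, carrying the “1” column by column. It is a few lines of ordinary code, which underscores how different GPT-3’s statistical approximation is from actually running the algorithm:

```python
# The grade-school algorithm GPT-3 fails to internalize, per Chiang: add
# digit columns right to left, carrying the 1 when a column overflows.
def add_with_carry(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    digits = []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))   # digit that stays in this column
        carry = total // 10              # the "1" carried to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carry("38", "27"))        # 65 (8+7=15: write 5, carry the 1)
assert add_with_carry("38", "27") == str(38 + 27)
```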
In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.
Can large language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question. There is a genre of art known as Xerox art, or photocopy art, in which artists use the distinctive properties of photocopiers as creative tools. Something along those lines is surely possible with the photocopier that is ChatGPT, so, in that sense, the answer is yes
If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
Sometimes it’s only in the process of writing that you discover your original ideas.
Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
Google vs. ChatGPT vs. Bing, Maybe — Pixel Envy
People are not interested in visiting websites about a topic; they, by and large, just want answers to their questions. Google has been strip-mining the web for years, leveraging its unique position as the world’s most popular website and its de facto directory to replace what made it great with what allows it to retain its dominance.
Artificial intelligence — or some simulation of it — really does make things better for searchers, and I bet it could reduce some tired search optimization tactics. But it comes at the cost of making us all into uncompensated producers for the benefit of trillion-dollar companies like Google and Microsoft.
Search optimization experts have spent years in an adversarial relationship with Google in an attempt to get their clients’ pages to the coveted first page of results, often through means which make results worse for searchers. Artificial intelligence is, it seems, a way out of this mess — but the compromise is that search engines get to take from everyone while giving nothing back. Google has been taking steps in this direction for years: its results page has been increasingly filled with ways of discouraging people from leaving its confines.
AI-generated code helps me learn and makes experimenting faster
here are five large language model applications that I find intriguing:
Intelligent automation, starting with browsers, but this feels like a step towards phenotropics.
Text generation, when this unlocks new UIs, like Word turning into Photoshop or something.
Human-machine interfaces, because you can parse intent instead of nouns (see the sketch after this list).
When meaning can be interfaced with programmatically, and at ludicrous scale.
Anything that exploits the inhuman breadth of knowledge embedded in the model, because new knowledge is often the collision of previously separated old knowledge, and this has not been possible before.
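As a sketch of that “parse intent instead of nouns” idea: hand a free-form request to a language model and ask for a small structured schema an application can act on. The schema and model name here are hypothetical, and this again assumes the OpenAI SDK with an API key set.

```python
# A sketch of "parse intent instead of nouns": map a free-form request onto
# a small structured schema an application can act on. The schema and model
# name are hypothetical, for illustration only.
import json
from openai import OpenAI

client = OpenAI()

def parse_intent(utterance: str) -> dict:
    """Turn a natural-language request into a machine-actionable dict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": (
                "Map the user's request to JSON with keys "
                "'action', 'target', and 'constraints'. Reply with JSON only."
            )},
            {"role": "user", "content": utterance},
        ],
        response_format={"type": "json_object"},  # ask for parseable JSON
    )
    return json.loads(response.choices[0].message.content)

print(parse_intent("find me somewhere quiet to work near the station tomorrow"))
# e.g. {"action": "search", "target": "workspace", "constraints": {...}}
```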
How AI will change your team's knowledge, forever