(How might image generators solve problems?
Elsewhere, people used to read a lot; courses were either gut classes or food for thought. Now social media is prevalent, so how is it used?
Robots are still unnerving.
In any case, Gödel remains, and so does cybernetics. Where is the determinism? Can any element remotely know another without a direct prompt? Or would that have to become a function of long-term evolution? In which case, what are the competing or environmental factors?
This may become interesting when robots like Optimus are sent on long-range missions, such as to Mars, without humans around and only occasionally on comms. How can that be tested more locally ahead of time? Other than by starving them of resources.
Who picks the missions? Scary version.
Perhaps a Tyrell origin story. Old Norse or French.
Who picks Tyrell?
What is the implicit goal?
Incidentally, early on, Minsky reportedly favored telepresence. Others later looked at expert systems. Assuming they are not hallucinating, machines are also capable of augmented or mixed reality. How do they know the difference? How do people, in either sense? Other than biases.
Or Space Force interns go DOGE versus Delphi mode.
Backtracking through the plan, from results to analysis or methods, how do they pick better questions or problem sets?
Does self-reporting constitute evidence of subjectivity?
Are there Platonic or universal prompts, or are they personalized or localized? Can those be made equivalent through a dictionary?
Is this another form of mimicry, where the collaborating or training partner or source is then removed? Expecting enlightenment, or at least pattern affinity.
Why AI?)
Planning Anything with Rigor: General-Purpose Zero-Shot Planning...
The Prompt Report: A Systematic Survey of Prompting Techniques
Principled Instructions Are All You Need for Questioning...
Anil, C., Durmus, E., Sharma, M., Benton, J., Kundu, S., Batson, J., ... & Duvenaud, D. (2024). Many-shot Jailbreaking.
Long contexts represent a new front in the struggle to control LLMs. We explored a family of attacks that are newly feasible due to longer context lengths, as well as candidate mitigations. We found that the effectiveness of attacks, and of in-context learning more generally, could be characterized by simple power laws. This provides a richer source of feedback for mitigating long-context attacks than the standard approach of measuring frequency of success.
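A rough sketch of that power-law claim, in my own notation rather than the paper's exact formulation: if $n$ is the number of in-context demonstrations, the negative log-likelihood of the targeted (harmful) response appears to decay roughly as

$\mathrm{NLL}(n) \approx C \, n^{-\alpha}$, with task-dependent constants $C, \alpha > 0$,

so each doubling of shots multiplies the model's remaining resistance by a fixed factor of about $2^{-\alpha}$. That smooth trend is why the authors call it a richer feedback signal than a binary success rate.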
Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering
Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding
Large Language Models as Optimizers
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT
Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis
Deliberate then Generate: Enhanced Prompting Framework for Text Generation
Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling
Automatic Prompt Optimization with "Gradient Descent" and Beam Search
Boosting Theory-of-Mind Performance in Large Language Models via Prompting
Learning to Compress Prompts with Gist Tokens
Instruction Tuning with GPT-4
Language Models can Solve Computer Tasks
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
A Good Prompt Is Worth Millions of Parameters? Low-resource...
The DALL·E 2 Prompt Book
HELP ME THINK: A Simple Prompting Strategy for Non-experts to...
Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints
ChatGPT for Robotics: Design Principles and Model Abilities
Ask Me Anything: A simple strategy for prompting language models