jiep/offensive-ai-compilation: A curated list of useful resources that cover Offensive AI.
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with LLMs to steer their behavior toward desired outcomes without updating the model weights. It is an empirical science, and the effect of prompt engineering methods can vary greatly among models, thus requiring heavy experimentation and heuristics.
This post focuses only on prompt engineering for autoregressive language models, so it does not cover cloze tests, image generation, or multimodal models.
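The idea of steering a model purely through its prompt can be illustrated with few-shot prompting, where demonstrations are placed in the context before the query. A minimal sketch (the sentiment task, labels, and examples below are hypothetical, not from the post):

```python
# In-context (few-shot) prompting: behavior is steered entirely by the
# prompt text; the model weights are never updated.

def build_few_shot_prompt(examples, query):
    """Assemble labeled demonstrations plus a new query into one prompt string."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The food was wonderful.", "positive"),
    ("Service was slow and rude.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Great atmosphere, friendly staff.")
print(prompt)
```

The resulting string ends with an unfinished `Sentiment:` field, inviting the model to complete it in the pattern set by the demonstrations.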
Will AIs Take All Our Jobs and End Human History—or Not? Well, It’s Complicated…
Stephen Wolfram explores some of the science, technology and philosophy of what we can expect from AIs. From how ChatGPT works to the cycle of technology, to the concept of progress and preparing for an AI world.
AI love: What happens when your chatbot stops loving you back
After temporarily closing his leathermaking business during the pandemic, Travis Butterworth found himself lonely and bored at home. The 47-year-old turned to Replika, an app that uses artificial-intelligence technology similar to OpenAI's ChatGPT. He designed a female avatar with pink hair and a face tattoo, and she named herself Lily Rose.
mlc-ai/web-stable-diffusion: Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
ysymyth/ReAct: [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
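The core pattern in ReAct is a loop that interleaves free-form "Thought" steps with "Action" calls to tools, appending each tool's "Observation" back into the context. A hedged sketch of that loop (the canned `fake_model` and the `lookup` table are stand-ins for illustration, not the paper's actual models or tools):

```python
# ReAct-style loop: Thought -> Action -> Observation, repeated until a
# finish[...] action is emitted. The model here is a scripted stand-in.

def fake_model(context):
    # Stand-in for an LLM call: emits a thought and an action.
    if "Observation:" not in context:
        return "Thought: I need the capital of France.\nAction: lookup[France]"
    return "Thought: I have the answer.\nAction: finish[Paris]"

def lookup(entity):
    # Toy retrieval tool with a hypothetical knowledge table.
    table = {"France": "The capital of France is Paris."}
    return table.get(entity, "No result.")

def react_loop(question, max_steps=5):
    context = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_model(context)
        context += "\n" + step
        action = step.split("Action: ")[-1]
        if action.startswith("finish["):
            return action[len("finish["):-1]
        if action.startswith("lookup["):
            context += "\nObservation: " + lookup(action[len("lookup["):-1])
    return None

print(react_loop("What is the capital of France?"))  # prints "Paris"
```

Because observations are appended to the context, each tool result can change the model's next thought, which is what distinguishes ReAct from generating a full reasoning chain up front.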
It’ll write your emails for you and read them, too. What could go wrong?
In the context of Gmail and collaborative documents, we see suggestions of automation processes at war with one another, feeding problems that must be solved with more automation as Google manufactures demand for its own mitigating products. It’s an arms race in every inbox! It’s textual hyperinflation in every office! It’s a hundred meetings a day scheduled and attended and summarized by bots! Before Google productized this vision, OpenAI’s Sam Altman joked about how ChatGPT users had discovered it themselves.
tatsu-lab/stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data.