Found 118 bookmarks
The Model That Changes Everything: Alpaca Breakthrough (ft. Apple's LLM, BritGPT, Ernie and AlexaTM)
8 years of cost reduction in 5 weeks: how Stanford's Alpaca model changes everything, including the economics of OpenAI and GPT 4. The breakthrough, using self-instruct, has big implications for Apple's secret large language model, Baidu's ErnieBot, Amazon's attempts and even governmental efforts, like the newly announced BritGPT. I will go through how Stanford put the model together, why it costs so little, and demonstrate it in action versus ChatGPT and GPT 4. And what are the implications of short-circuiting human annotation like this? With analysis of a tweet by Eliezer Yudkowsky, I delve into the workings of the model and the questions it raises.
Web Demo: https://alpaca-ai0.ngrok.io/
Alpaca: https://crfm.stanford.edu/2023/03/13/alpaca.html
Ark Forecast: https://research.ark-invest.com/hubfs/1_Download_Files_ARK-Invest/Big_Ideas/ARK%20Invest_013123_Presentation_Big%20Ideas%202023_Final.pdf
Eliezer Tweet: https://twitter.com/ESYudkowsky/status/1635577836525469697 https://twitter.com/ESYudkowsky/status/1635667349792780288
Self-Instruct: https://arxiv.org/pdf/2212.10560.pdf
InstructGPT: https://openai.com/research/instruction-following
OpenAI Terms: https://openai.com/policies/terms-of-use
MMLU Test: https://arxiv.org/pdf/2009.03300.pdf
Apple LLM: https://www.nytimes.com/2023/03/15/technology/siri-alexa-google-assistant-artificial-intelligence.html
GPT 4 API: https://openai.com/pricing
Llama Models: https://arxiv.org/pdf/2302.13971.pdf
BritGPT: https://www.theguardian.com/technology/2023/mar/15/uk-to-invest-900m-in-supercomputer-in-bid-to-build-own-britgpt
Amazon: https://www.businessinsider.com/amazons-ceo-andy-jassy-on-chat-cpt-ai-2023-2?r=US&IR=T
AlexaTM: https://arxiv.org/pdf/2208.01448.pdf
Baidu Ernie: https://www.nytimes.com/2023/03/16/world/asia/china-baidu-chatgpt-ernie.html
PaLM API: https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html
https://www.patreon.com/AIExplained
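As a rough, hypothetical sketch of the self-instruct idea mentioned above (not Stanford's actual Alpaca pipeline): a strong teacher model is shown a few seed instruction/output pairs, asked to produce new ones, the results are lightly filtered, and the resulting dataset is used to fine-tune a smaller base model. The seed tasks, prompt wording, and generate_with_teacher helper below are placeholders.

```python
# Hypothetical sketch of self-instruct-style data generation; not the actual
# Stanford Alpaca code.
import json
import random

SEED_TASKS = [
    {"instruction": "List three uses for a paperclip.",
     "output": "Holding paper together, resetting devices, improvising a hook."},
    {"instruction": "Translate 'good morning' into French.",
     "output": "Bonjour."},
]

def build_prompt(examples):
    """Show the teacher model a few existing tasks and ask for a new one."""
    shots = "\n\n".join(
        f"Instruction: {t['instruction']}\nOutput: {t['output']}" for t in examples
    )
    return shots + "\n\nWrite one new, different instruction and its output in the same format."

def generate_with_teacher(prompt: str) -> str:
    """Placeholder for a call to a large 'teacher' LLM (e.g. a hosted API)."""
    raise NotImplementedError("plug in your own LLM call here")

def self_instruct(rounds: int = 1000) -> None:
    dataset = list(SEED_TASKS)
    for _ in range(rounds):
        prompt = build_prompt(random.sample(dataset, k=min(2, len(dataset))))
        completion = generate_with_teacher(prompt)
        # Very light filtering: keep only well-formed, non-duplicate pairs.
        if "Instruction:" in completion and "Output:" in completion:
            instruction, output = completion.split("Output:", 1)
            pair = {"instruction": instruction.replace("Instruction:", "").strip(),
                    "output": output.strip()}
            if pair not in dataset:
                dataset.append(pair)
    # The resulting file is then used to fine-tune a smaller base model.
    with open("self_instruct_data.json", "w") as f:
        json.dump(dataset, f, indent=2)
```

The cost advantage described in the video comes from this loop standing in for human annotators: the expensive step becomes calls to the teacher model rather than human labeling.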
·youtube.com·
Introducing LLaMA: A foundational, 65-billion-parameter language model
Today, we’re releasing our LLaMA (Large Language Model Meta AI) foundational model with a gated release. LLaMA is more efficient than, and competitive with, previously published models of a similar size on existing benchmarks.
·ai.facebook.com·
Mesa-Optimization - AI Alignment Forum
Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer. In this situation, a base optimizer creates a second optimizer, called a mesa-optimizer. The primary reference work for this concept is Hubinger et al.'s "Risks from Learned Optimization in Advanced Machine Learning Systems".

Example: Natural selection is an optimization process that optimizes for reproductive fitness. Natural selection produced humans, who are themselves optimizers. Humans are therefore mesa-optimizers of natural selection.

In the context of AI alignment, the concern is that a base optimizer (e.g., a gradient descent process) may produce a learned model that is itself an optimizer, and that has unexpected and undesirable properties. Even if the gradient descent process is in some sense "trying" to do exactly what human developers want, the resultant mesa-optimizer will not typically be trying to do the exact same thing.[1]

HISTORY
Previously, work under this concept was called Inner Optimizer or Optimization Daemons. Wei Dai brings up a similar idea in an SL4 thread.[2] The optimization daemons article on Arbital was probably published in 2016.[1] Jessica Taylor wrote two posts about daemons while at MIRI:
* "Are daemons a problem for ideal agents?" (2017-02-11)
* "Maximally efficient agents will probably have an anti-daemon immune system" (2017-02-23)

SEE ALSO
* Inner Alignment
* Complexity of value
* Thou Art Godshatter

EXTERNAL LINKS
Video by Robert Miles

Some posts that reference optimization daemons:
* "Cause prioritization for downside-focused value systems": "Alternatively, perhaps goal preservation becomes more difficult the more capable AI systems become, in which case the future might be controlled by unstable goal functions taking turns over the steering wheel"
* "Techniques for optimizing worst-case performance": "The difficulty of optimizing worst-case performance is one of the most likely re…"
·alignmentforum.org·
Question Answering - OpenAI | Weaviate - vector search engine
In short
First, it performs a semantic search with k=1 to find the document (e.g. a Sentence, Paragraph, Article, etc.) most likely to contain the answer. This step has no certainty threshold: as long as at least one document is present, it will be fetched and selected as the one most likely to contain the answer. In a second step, Weaviate creates the required prompt as the input to an external call made to the OpenAI Completions endpoint. Weaviate uses the most relevant document to construct a prompt from which OpenAI extracts the answer.
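To make the two-step flow concrete, here is a minimal sketch of the same retrieve-then-extract pattern outside of Weaviate: a k=1 vector search over pre-embedded documents, followed by a call to the OpenAI Completions endpoint with the retrieved document placed in the prompt. The embedding model, completion model, and prompt wording are assumptions for illustration, not Weaviate's internal implementation.

```python
# Sketch of the retrieve-then-extract pattern described above (not Weaviate's
# internals). Assumes the openai>=1.0 Python client and an OPENAI_API_KEY env var.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def answer(question: str, documents: list[str]) -> str:
    # Step 1: semantic search with k=1 -- pick the single closest document.
    doc_vectors = np.stack([embed(d) for d in documents])
    q = embed(question)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    best_doc = documents[int(np.argmax(sims))]

    # Step 2: build a prompt and ask the Completions endpoint to extract the answer.
    prompt = (
        f"Context:\n{best_doc}\n\n"
        "Answer the question using only the context above.\n"
        f"Question: {question}\nAnswer:"
    )
    completion = client.completions.create(
        model="gpt-3.5-turbo-instruct", prompt=prompt, max_tokens=100
    )
    return completion.choices[0].text.strip()
```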
·weaviate.io·
New AI classifier for indicating AI-written text
We’re launching a classifier trained to distinguish between AI-written and human-written text. We’ve trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers. While it is impossible to reliably detect all AI-written text, we believe good classifiers…
·openai.com·
Secure multi-party computation - Wikipedia
Secure multi-party computation (also known as secure computation, multi-party computation (MPC) or privacy-preserving computation) is a subfield of cryptography with the goal of creating methods for parties to jointly compute a function over their inputs while keeping those inputs private. Unlike traditional cryptographic tasks, where cryptography assures security and integrity of communication or storage and the adversary is outside the system of participants (an eavesdropper on the sender and receiver), the cryptography in this model protects participants' privacy from each other.
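As a toy illustration of the idea (a minimal sketch, not a production protocol): with additive secret sharing, each party splits its private input into random shares that sum to the input modulo a public prime, distributes one share to each party, and the parties publish only the sums of the shares they hold. No individual input is revealed, yet the total can be reconstructed.

```python
# Toy additive secret sharing: three parties compute the sum of their private
# inputs without revealing any single input. Illustrative only; a real MPC
# protocol also needs secure channels and protections against malicious parties.
import secrets

P = 2**61 - 1  # public prime modulus, large enough to avoid wrap-around here

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n random shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(private_inputs: list[int]) -> int:
    n = len(private_inputs)
    # Each party i produces one share of its input for every party j.
    all_shares = [share(x, n) for x in private_inputs]
    # Party j locally adds the shares it received (column j) and publishes only that sum.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
    # The published partial sums reveal nothing beyond the total.
    return sum(partial_sums) % P

print(secure_sum([42, 100, 7]))  # -> 149
```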
·en.wikipedia.org·
Training Your Own Dense Passage Retrieval Model | Haystack
Learn about training a Dense Passage Retrieval model and the data needed to do so.
DPR is standardly trained using a method known as in-batch negatives. This means that positive contexts for a given query are treated as negative contexts for the other queries in the batch. Doing so allows for a high degree of computational efficiency, thus allowing the model to be trained on large amounts of data.
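A minimal sketch of what in-batch negatives look like as a loss function (illustrative, not Haystack's actual training code): queries and their positive passages are embedded, the batch-by-batch similarity matrix is computed, and each query's positive sits on the diagonal, so every other passage in the batch acts as a free negative for that query.

```python
# Sketch of the in-batch negatives objective used for DPR-style training
# (illustrative; not Haystack's implementation). Assumes two encoders that
# map text to fixed-size vectors.
import torch
import torch.nn.functional as F

def in_batch_negatives_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor) -> torch.Tensor:
    """
    query_emb:   (batch, dim) embeddings of the queries
    passage_emb: (batch, dim) embeddings of each query's positive passage
    """
    # Similarity of every query with every passage in the batch: (batch, batch).
    scores = query_emb @ passage_emb.T
    # Row i's correct passage is passage i; all other columns serve as negatives.
    targets = torch.arange(query_emb.size(0), device=query_emb.device)
    return F.cross_entropy(scores, targets)

# Example with random embeddings standing in for trainable encoder outputs.
q = torch.randn(8, 768, requires_grad=True)
p = torch.randn(8, 768, requires_grad=True)
loss = in_batch_negatives_loss(q, p)
loss.backward()
```

Because the negatives come from passages already in the batch, no extra forward passes are needed, which is the computational efficiency the passage above refers to.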
·haystack.deepset.ai·