ArtificialIntelligence

54 bookmarks
Interpreting the key technologies behind ChatGPT: RLHF, IFT, CoT, and red teaming
Recently, ChatGPT burst onto the scene with huge success, bringing obscure acronyms such as RLHF, SFT, IFT, and CoT into mainstream discussion. What do these acronyms actually mean, and why do they matter? We surveyed all the important related papers…
·zhuanlan.zhihu.com·
LAION-AI/Open-Assistant: OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
·github.com·
mlc-ai/mlc-llm: Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
·github.com·
HqWu-HITCS/Awesome-Chinese-LLM: a curated collection of open-source Chinese large language models, focused on smaller models that can be privately deployed at low training cost, covering base models, vertical-domain fine-tuning and applications, datasets, and tutorials.
·github.com·
Introducing Chat with Retrieval-Augmented Generation (RAG)
We are excited to announce that our Chat API with RAG is now available in a public beta. With this new capability, developers can integrate user inputs, data sources, and model generations to build powerful product experiences and mitigate hallucinations by producing grounded and verifiable generations. The API is powered
·txt.cohere.com·
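The grounding loop that the RAG blurb describes can be sketched in a few lines: retrieve the document most relevant to a user query, then hand it to the generator as context so the answer is verifiable against a source. The bag-of-words retriever and prompt format below are illustrative stand-ins of my own, not the Cohere Chat API; a real system would use dense embeddings and an LLM.

```python
# Minimal RAG sketch: retrieve the best-matching document for a query,
# then build a prompt that grounds the generation in that document.
# The retriever is a toy word-overlap score (assumption, not Cohere's API).

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that quotes the retrieved context verbatim."""
    context = retrieve(query, docs)
    return (f"Context: {context}\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

docs = [
    "The Chat API supports retrieval-augmented generation.",
    "Few-shot learning adapts from a handful of examples.",
]
prompt = build_grounded_prompt("What does the Chat API support?", docs)
print(prompt.splitlines()[0])
# → Context: The Chat API supports retrieval-augmented generation.
```

Because the model only sees retrieved text, its output can cite that text, which is what makes RAG generations "grounded and verifiable" and curbs hallucination.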
Baichuan LLM: Gathering the World's Knowledge, Writing with Flair - Baichuan AI
Baichuan AI's mission is to help everyone access world knowledge and professional services easily and affordably. Through breakthroughs in language AI, it aims to build China's best foundation model. The Baichuan LLM combines intent understanding, information retrieval, and reinforcement learning with supervised fine-tuning and human-intent alignment, and excels at knowledge Q&A and text creation.
·baichuan-ai.com·
Papers with Code - Few-Shot Learning
**Few-Shot Learning** is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for the various tasks and train task-specific classifiers on top of this representation. Source: [Penalty Method for Inversion-Free Deep Bilevel Optimization](https://arxiv.org/abs/1911.03432)
·paperswithcode.com·
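The "common representation plus task-specific classifier" recipe in the few-shot entry can be sketched with a nearest-prototype classifier: average the few labeled support examples per class, then assign a query to the closest class mean. The 2-D features and labels below are toy assumptions standing in for a learned representation.

```python
# Minimal few-shot sketch: build class "prototypes" (mean feature vectors)
# from a handful of labeled support examples, then classify a query point
# by its nearest prototype. The 2-D features stand in for a learned
# representation (assumption for illustration).
from collections import defaultdict
import math

def prototypes(support):
    """support: list of ((x, y), label) -> {label: mean feature vector}."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in support:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {l: (s[0] / counts[l], s[1] / counts[l]) for l, s in sums.items()}

def classify(query, protos):
    """Return the label whose prototype is closest in Euclidean distance."""
    return min(protos, key=lambda l: math.dist(query, protos[l]))

# Two examples per class is all the "training" this classifier needs.
support = [((0.0, 0.1), "cat"), ((0.1, 0.0), "cat"),
           ((1.0, 0.9), "dog"), ((0.9, 1.0), "dog")]
protos = prototypes(support)
print(classify((0.2, 0.1), protos))  # → cat
```

This is the simplest instance of the idea: the heavy lifting happens in learning the representation during meta-training, after which the task-specific classifier can be as cheap as a class mean.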