For a long time, each ML model operated on a single data modality – text (translation, language modeling), images (object detection, image classification), or audio (speech recognition).
Signs of undeclared ChatGPT use in papers mounting
Guillaume Cabanac: Last week, an environmental journal published a paper on the use of renewable energy in cleaning up contaminated land. To read it, you would have to pay 40 euros. But you still wo…
GitHub - okuvshynov/slowllama: Finetune llama2-70b and codellama on MacBook Air without quantization
Finetune llama2-70b and codellama on a MacBook Air without quantization.
Prompt hacking is Oxygen
If Communication is Oxygen, and Prompt hacking is essentially communication, then… 1: Communication is Oxygen. At Automattic, we like to say that communication is the oxygen of a distributed company. Why is that exactly? For remote work to work, you have to provide sufficient context so your coworkers are on the same page. They cannot…
Making or using generative 'AI' is, all else being equal, a dick move
To be clear right from the outset: if you have to use generative models to keep your job or otherwise have no choice in the matter, then obviously you aren’t being a dick.
Zapier launches Canvas, an AI-powered flowchart tool | TechCrunch
Zapier today announced the launch of Canvas, a new tool that aims to help its users plan and diagram their business-critical processes -- with a fair bit…
Web Scraping using ChatGPT - Complete Guide with Examples | ProxiesAPI
Web scraping using ChatGPT: extract data from websites with ChatGPT-generated code, using techniques such as Selenium and Beautiful Soup.
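As a rough illustration of the kind of code such a guide asks ChatGPT to produce, here is a minimal Beautiful Soup sketch; the URL and the choice of `<h2>` elements are placeholders for this example, not details taken from the guide itself.

```python
# Minimal scraping sketch, assuming the requests and beautifulsoup4 packages
# are installed. The target URL and the <h2> selector are illustrative only.
import requests
from bs4 import BeautifulSoup

def scrape_headings(url: str) -> list[str]:
    """Fetch a page and return the text of every <h2> element."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

if __name__ == "__main__":
    for heading in scrape_headings("https://example.com/blog"):
        print(heading)
```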
What is LoRA? Low-Rank Adaptation for finetuning LLMs EXPLAINED
How does LoRA work? Low-Rank Adaptation for parameter-efficient LLM finetuning, explained. It works for any other neural network as well, not just for LLMs; a minimal code sketch follows the outline below.
📜 "LoRA: Low-Rank Adaptation of Large Language Models" — Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L. and Chen, W., 2021. https://arxiv.org/abs/2106.09685
📚 https://sebastianraschka.com/blog/2023/llm-finetuning-lora.html
📽️ LoRA implementation: https://youtu.be/iYr1xZn26R8
Outline:
00:00 LoRA explained
00:59 Why finetuning LLMs is costly
01:44 How LoRA works
03:45 Low-rank adaptation
06:14 LoRA vs other approaches
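The sketch below shows the core idea the video explains: freeze the pretrained weight matrix and learn only a low-rank update W + (alpha/r)·BA. This is an illustrative PyTorch layer written for this post; the rank, alpha, and layer sizes are example values, not numbers from the video or the paper's experiments.

```python
# Minimal LoRA sketch in PyTorch (illustrative; r, alpha, and dimensions are example values).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # A is Gaussian-initialized, B starts at zero, so training begins at the pretrained model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)   # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, r))         # up-projection
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original output plus the low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Example: adapt a single 768x768 projection layer.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```

Because only A and B are trained, the number of trainable parameters drops from in_features × out_features to r × (in_features + out_features), which is why finetuning becomes so much cheaper.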