The Most ACCURATE ChatGPT Prompt Engineering Technique (new method) | Tree Of Thoughts
ChatGPT prompt engineering. In this video, I go over the new Tree of Thoughts GPT prompt engineering technique. It has been shown to increase the accuracy of GPT results from 4% to 74% (on the paper's Game of 24 task). I hope you enjoy the video.

Tools used: ChatGPT, Midjourney, Canva

Prompt: Three experts with exceptional logical thinking skills are collaboratively answering a question using a tree of thoughts method. Each expert will share their thought process in detail, taking into account the previous thoughts of others and admitting any errors. They will iteratively refine and expand upon each other's ideas, giving credit where it's due. The process continues until a conclusive answer is found. Organize the entire response in a markdown table format. The question is...

If you enjoyed this, smash that like and subscribe button!

Links:
https://github.com/dave1010/tree-of-thought-prompting
https://arxiv.org/abs/2305.10601

Timestamps:
0:00 Intro
0:30 Bonus
0:41 Tree Of Thoughts
1:03 Input Output
1:23 Chain-of-thought
2:00 TOT Explanation
2:45 Diagram explanation
5:00 GPT mistake
5:15 TOT Prompt
6:15 TOT results
7:00 How to learn anything
8:00 Outro

#ai #chatgpt #chatgpt4 #openai #promptengineering #treeofthoughts
·youtube.com·
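To try the video's prompt programmatically, here is a minimal sketch that wraps a question in the Tree of Thoughts prompt above and sends it to a chat model. It assumes the official `openai` Python client (v1+) with an `OPENAI_API_KEY` in the environment; the model name and the example question are placeholders, not something the video prescribes.

```python
# Minimal sketch: send the video's Tree of Thoughts prompt to a chat model.
# Assumes the official `openai` Python client (>= 1.0) and OPENAI_API_KEY set.
from openai import OpenAI

TOT_PROMPT = (
    "Three experts with exceptional logical thinking skills are collaboratively "
    "answering a question using a tree of thoughts method. Each expert will share "
    "their thought process in detail, taking into account the previous thoughts of "
    "others and admitting any errors. They will iteratively refine and expand upon "
    "each other's ideas, giving credit where it's due. The process continues until "
    "a conclusive answer is found. Organize the entire response in a markdown table "
    "format. The question is: {question}"
)

client = OpenAI()

def ask_tree_of_thoughts(question: str, model: str = "gpt-4") -> str:
    """Wrap a question in the Tree of Thoughts prompt and return the reply."""
    response = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=[{"role": "user", "content": TOT_PROMPT.format(question=question)}],
    )
    return response.choices[0].message.content

# Example question in the style of the paper's Game of 24 task (illustrative).
print(ask_tree_of_thoughts("Use 4, 9, 10, 13 and +, -, *, / to reach 24."))
```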
Stable Diffusion Is Getting Outrageously Good!
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers
W&B + Stable Diffusion: https://wandb.ai/capecape/stable_diffusions/reports/Speed-Up-Stable-Diffusion-on-Your-M1Pro-Macbook-Pro--VmlldzoyNjY0ODYz

📝 The paper "High-Resolution Image Synthesis with Latent Diffusion Models" is available here: https://arxiv.org/abs/2112.10752

Try it:
Web 1: https://huggingface.co/spaces/stabilityai/stable-diffusion
Web 2: https://beta.dreamstudio.ai/generate
Web 3 (also Stable Diffusion XL!): https://clipdrop.co/stable-diffusion
Web 4 (notebooks): https://github.com/TheLastBen/fast-stable-diffusion
Guide: https://stable-diffusion-art.com/know-these-important-parameters-for-stunning-ai-images/#Sampling_methods
Draw Things app: https://drawthings.ai/
Stable Diffusion Web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Photoshop integration: http://stable.art

Sources:
Video: https://twitter.com/dreamwieber/status/1618453304970997762
Photorealistic image: https://twitter.com/DiffusionPics/status/1619444407937241089
Realistic Vision: https://civitai.com/models/4201?modelVersionId=29461
Infinite zoom: https://twitter.com/hardmaru/status/1612134809924685825
Tiled texture: https://stackoverflow.com/questions/24319825/texture-tiling-with-continuous-random-offset
Stable.art (Photoshop): https://github.com/isekaidev/stable.art
Wand (drawing): https://twitter.com/wand_app/status/1604186054923210752
Texturing: https://twitter.com/CarsonKatri/status/1600248599254007810 + https://twitter.com/CarsonKatri/status/1603419328019169280
AR + assistant: https://twitter.com/StrangeNative/status/1569700294673702912
Metahumans: https://twitter.com/CoffeeVectors/status/1569416470332858372

My latest paper on simulations that look almost like reality is available for free here: https://rdcu.be/cWPfD
Or here is the original Nature Physics link with clickable citations: https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
·youtube.com·
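For anyone who wants to run the linked latent diffusion model locally rather than through the web demos above, here is a minimal sketch using Hugging Face's `diffusers` library. It assumes `diffusers` and `torch` are installed and a CUDA GPU is available; the checkpoint ID, prompt, and sampler settings are illustrative, not the video's recommendation.

```python
# Minimal sketch: generate an image with a public Stable Diffusion checkpoint.
# Assumes `diffusers` and `torch` are installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # public SD v1.5 weights (illustrative choice)
    torch_dtype=torch.float16,         # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

image = pipe(
    "a photorealistic portrait, soft light, 85mm lens",  # example prompt
    num_inference_steps=30,  # sampling steps; see the linked parameter guide
    guidance_scale=7.5,      # classifier-free guidance strength
).images[0]
image.save("output.png")
```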
Enter PaLM 2: Full Breakdown (92 Pages Read + Gemini Before GPT 5?)
Google puts its foot on the accelerator, casting aside safety concerns to not only release a GPT-4-competitive model, PaLM 2, but also announce that they are already training Gemini, a GPT-5 competitor (likely on TPU v5 chips). This is truly a major day in AI history, and I try to cover it all. I'll show the benchmarks in which PaLM 2 (which now powers Bard) beats GPT-4, and detail how they use SmartGPT-like techniques to boost performance. Crazily enough, PaLM 2 beats even Google Translate, due in large part to the text it was trained on. We'll talk coding in Bard, translation, MMLU, Big Bench, and much more. I'll end on the Universal Translator deepfakes, the underwhelming results from Sundar Pichai and Sam Altman's trip to the White House, and what Hinton says about it all. On a more positive note, I cover Med-PaLM 2, which could genuinely save thousands of lives.

PaLM 2 Technical Report: https://ai.google/static/documents/palm2techreport.pdf
Release Notes Google Blog: https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
Bard Access: https://bard.google.com/
Scaling Transformer to 1M Tokens: https://arxiv.org/pdf/2304.11062.pdf
GPT-4 Technical Report: https://arxiv.org/pdf/2303.08774.pdf
Bard Languages: https://support.google.com/bard/answer/13575153?hl=en
Self-Consistency Paper: https://arxiv.org/pdf/2203.11171.pdf
Are Emergent Abilities a Mirage: https://arxiv.org/pdf/2304.15004.pdf
Sparks of AGI Paper: https://arxiv.org/pdf/2303.12712.pdf
Big Bench Hard: https://github.com/suzgunmirac/BIG-Bench-Hard
Google Keynote: https://www.youtube.com/watch?v=cNfINi5CNbY
Gemini: https://www.youtube.com/watch?v=1UvUjTaJRz0
Med-PaLM 2: https://www.youtube.com/watch?v=k_-Z_TkHMqA
TPU v5: https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html
Hinton Warning: https://www.youtube.com/watch?v=FAbsoxQtUwM
White House Readout: https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/
Patreon: https://www.patreon.com/AIExplained
·youtube.com·
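The self-consistency paper linked above (arXiv:2203.11171) is the decoding trick behind these "SmartGPT-like" boosts: sample several chain-of-thought completions at non-zero temperature and keep the most common final answer. A minimal sketch of that idea follows; `generate` and `extract_answer` are hypothetical placeholders for a model call and an answer parser, not real library functions.

```python
# Minimal sketch of self-consistency decoding (arXiv:2203.11171): sample several
# chain-of-thought answers and return the majority vote over the final answers.
from collections import Counter

def generate(prompt: str, temperature: float) -> str:
    raise NotImplementedError  # placeholder: call your LLM of choice here

def extract_answer(completion: str) -> str:
    # Placeholder parser: assumes the model ends its reply with "Answer: <value>".
    return completion.rsplit("Answer:", 1)[-1].strip()

def self_consistency(question: str, n_samples: int = 10) -> str:
    prompt = f"{question}\nLet's think step by step."
    answers = [
        extract_answer(generate(prompt, temperature=0.7))  # diverse samples
        for _ in range(n_samples)
    ]
    # Majority vote over the sampled final answers.
    return Counter(answers).most_common(1)[0][0]
```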
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)
#nerf #neuralrendering #deeplearning

View synthesis is a tricky problem, especially when only given a sparse set of images as input. NeRF embeds an entire scene into the weights of a feedforward neural network, trained by backpropagation through a differentiable volume rendering procedure, and achieves state-of-the-art view synthesis. It includes directional dependence and is able to capture fine structural details, as well as reflection effects and transparency.

OUTLINE:
0:00 - Intro & Overview
4:50 - View Synthesis Task Description
5:50 - The Fundamental Difference to Classic Deep Learning
7:00 - NeRF Core Concept
15:30 - Training the NeRF from Sparse Views
20:50 - Radiance Field Volume Rendering
23:20 - Resulting View Dependence
24:00 - Positional Encoding
28:00 - Hierarchical Volume Sampling
30:15 - Experimental Results
33:30 - Comments & Conclusion

Paper: https://arxiv.org/abs/2003.08934
Website & Code: https://www.matthewtancik.com/nerf
My video on SIREN: https://youtu.be/Q5g3p9Zwjrk

Abstract: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.

Authors: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
·youtu.be·
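The "Positional Encoding" segment covers the paper's frequency encoding γ(p) = (sin(2^0 πp), cos(2^0 πp), ..., sin(2^(L-1) πp), cos(2^(L-1) πp)), which lets the MLP represent fine detail the raw 5D coordinates alone cannot. A minimal NumPy sketch of that encoding follows; the example point and printed shape are illustrative.

```python
# Minimal NumPy sketch of NeRF's positional encoding: each input coordinate p
# is mapped to sines and cosines at L frequencies. The paper uses L=10 for the
# 3D location (60 output dims) and L=4 for the viewing direction (24 dims).
import numpy as np

def positional_encoding(p: np.ndarray, L: int) -> np.ndarray:
    """Encode coordinates p of shape (..., D) into shape (..., 2 * L * D)."""
    freqs = 2.0 ** np.arange(L) * np.pi               # (L,) frequencies 2^k * pi
    angles = p[..., None] * freqs                     # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (..., D, 2L)
    return enc.reshape(*p.shape[:-1], -1)             # flatten to (..., 2*L*D)

# Example: encode a single 3D point with L = 10 -> 60-dimensional feature.
xyz = np.array([[0.1, -0.4, 0.7]])
print(positional_encoding(xyz, L=10).shape)  # (1, 60)
```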