Stable Diffusion Is Getting Outrageously Good!
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers
W&B + Stable Diffusion: https://wandb.ai/capecape/stable_diffusions/reports/Speed-Up-Stable-Diffusion-on-Your-M1Pro-Macbook-Pro--VmlldzoyNjY0ODYz

📝 The paper "High-Resolution Image Synthesis with Latent Diffusion Models" is available here: https://arxiv.org/abs/2112.10752

Try it:
Web 1: https://huggingface.co/spaces/stabilityai/stable-diffusion
Web 2: https://beta.dreamstudio.ai/generate
Web 3 (also Stable Diffusion XL!): https://clipdrop.co/stable-diffusion
Web 4 (notebooks): https://github.com/TheLastBen/fast-stable-diffusion
Guide: https://stable-diffusion-art.com/know-these-important-parameters-for-stunning-ai-images/#Sampling_methods
Draw Things app: https://drawthings.ai/
Stable Diffusion Web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Photoshop integration: http://stable.art

Sources:
Video: https://twitter.com/dreamwieber/status/1618453304970997762
Photorealistic image: https://twitter.com/DiffusionPics/status/1619444407937241089
Realistic Vision: https://civitai.com/models/4201?modelVersionId=29461
Infinite zoom: https://twitter.com/hardmaru/status/1612134809924685825
Tiled texture: https://stackoverflow.com/questions/24319825/texture-tiling-with-continuous-random-offset
Stable.art (Photoshop): https://github.com/isekaidev/stable.art
Wand (drawing): https://twitter.com/wand_app/status/1604186054923210752
Texturing: https://twitter.com/CarsonKatri/status/1600248599254007810 + https://twitter.com/CarsonKatri/status/1603419328019169280
AR + assistant: https://twitter.com/StrangeNative/status/1569700294673702912
Metahumans: https://twitter.com/CoffeeVectors/status/1569416470332858372

My latest paper on simulations that look almost like reality is available for free here: https://rdcu.be/cWPfD
Or this is the original Nature Physics link with clickable citations: https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
·youtube.com·
Dissecting Recall of Factual Associations in Auto-Regressive Language Models
Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work has looked into where factual associations are stored, little is known about how they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
·arxiv.org·
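The abstract above relies on interventions on attention edges: blocking information flow from chosen source positions (e.g. the subject tokens) to a chosen target position (the final prediction position) and measuring how the prediction changes. Below is a minimal sketch of that knockout idea on a toy single-head causal attention layer. It is not the paper's code; the function name `causal_attention`, the `block_edges` argument, and the toy dimensions are all illustrative assumptions.

```python
# Toy illustration of "attention knockout": force selected attention edges
# (target position <- source position) to zero and compare the target
# position's output with and without the intervention.
import torch
import torch.nn.functional as F

def causal_attention(x, w_q, w_k, w_v, block_edges=()):
    """x: (seq_len, d_model). block_edges: iterable of (target, source) pairs
    whose attention weight is forced to zero before renormalization."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / k.shape[-1] ** 0.5              # (seq_len, seq_len)
    seq_len = x.shape[0]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    scores = scores.masked_fill(~causal, float("-inf"))
    # Intervention: cut the chosen edges so the target position cannot read
    # from the source position (e.g. prediction position <- subject position).
    for tgt, src in block_edges:
        scores[tgt, src] = float("-inf")
    attn = F.softmax(scores, dim=-1)
    return attn @ v

# Usage sketch: compare the last position's output with and without blocking
# the edge from a hypothetical subject position (index 1) to it (index 3).
torch.manual_seed(0)
d = 8
x = torch.randn(4, d)
w_q, w_k, w_v = (torch.randn(d, d) * 0.1 for _ in range(3))
clean = causal_attention(x, w_q, w_k, w_v)
knocked = causal_attention(x, w_q, w_k, w_v, block_edges=[(3, 1)])
print((clean[3] - knocked[3]).norm())  # size of the effect of the blocked edge
```

In the paper's setting, the same kind of edge blocking would be applied inside a pretrained LM at selected layers (for instance via hooks on its attention modules), comparing the probability of the correct attribute with and without the blocked edges.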