Found 187 bookmarks
AutoML-GPT: Automatic Machine Learning with GPT
AI tasks encompass a wide range of domains and fields. While numerous AI models have been designed for specific tasks and applications, they often require considerable human effort in finding the right model architecture, optimization algorithm, and hyperparameters. Recent advances in large language models (LLMs) like ChatGPT show remarkable capabilities in various aspects of reasoning, comprehension, and interaction. Consequently, we propose developing task-oriented prompts and automatically utilizing LLMs to automate the training pipeline. To implement this concept, we present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters. AutoML-GPT dynamically takes user requests from the model and data cards and composes the corresponding prompt paragraph. Ultimately, with this prompt paragraph, AutoML-GPT automatically conducts the experiments, from data processing to model architecture, hyperparameter tuning, and predicted training logs. By leveraging the LLM's robust language capabilities and the available AI models, AutoML-GPT can tackle numerous intricate AI tasks across diverse domains and datasets. This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas. Extensive experiments and ablation studies demonstrate that our method is general, effective, and beneficial for many AI tasks.
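As a rough illustration of the pipeline described above (a model card and a data card composed into a prompt paragraph that drives the experiment), here is a minimal Python sketch; the card fields and template wording are assumptions for illustration, not the paper's actual schema:

```python
# Illustrative sketch of the AutoML-GPT idea: compose a task prompt from a
# model card and a data card. Field names and the template are assumptions,
# not the paper's actual schema.

def compose_prompt(model_card: dict, data_card: dict) -> str:
    return (
        f"Task: {data_card['task']} on dataset {data_card['name']} "
        f"({data_card['size']} samples, domain: {data_card['domain']}).\n"
        f"Candidate model: {model_card['architecture']} "
        f"(tunable: {', '.join(model_card['hyperparameters'])}).\n"
        "Plan the data processing, select hyperparameters, "
        "and predict the training log."
    )

model_card = {"architecture": "ResNet-50",
              "hyperparameters": ["learning_rate", "batch_size", "epochs"]}
data_card = {"name": "CIFAR-10", "task": "image classification",
             "size": 50_000, "domain": "natural images"}

print(compose_prompt(model_card, data_card))  # prompt paragraph handed to the LLM
```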
·arxiv.org·
Stable Diffusion Is Getting Outrageously Good!
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers
W&B + Stable Diffusion: https://wandb.ai/capecape/stable_diffusions/reports/Speed-Up-Stable-Diffusion-on-Your-M1Pro-Macbook-Pro--VmlldzoyNjY0ODYz
📝 The paper "High-Resolution Image Synthesis with Latent Diffusion Models" is available here: https://arxiv.org/abs/2112.10752
Try it:
Web 1: https://huggingface.co/spaces/stabilityai/stable-diffusion
Web 2: https://beta.dreamstudio.ai/generate
Web 3 (also Stable Diffusion XL!): https://clipdrop.co/stable-diffusion
Web 4 (notebooks): https://github.com/TheLastBen/fast-stable-diffusion
Guide: https://stable-diffusion-art.com/know-these-important-parameters-for-stunning-ai-images/#Sampling_methods
Draw Things app: https://drawthings.ai/
Stable Diffusion Web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Photoshop integration: http://stable.art
Sources:
Video: https://twitter.com/dreamwieber/status/1618453304970997762
Photorealistic image: https://twitter.com/DiffusionPics/status/1619444407937241089
Realistic Vision: https://civitai.com/models/4201?modelVersionId=29461
Infinite zoom: https://twitter.com/hardmaru/status/1612134809924685825
Tiled texture: https://stackoverflow.com/questions/24319825/texture-tiling-with-continuous-random-offset
Stable.art (Photoshop): https://github.com/isekaidev/stable.art
Wand (drawing): https://twitter.com/wand_app/status/1604186054923210752
Texturing: https://twitter.com/CarsonKatri/status/1600248599254007810 + https://twitter.com/CarsonKatri/status/1603419328019169280
AR + assistant: https://twitter.com/StrangeNative/status/1569700294673702912
Metahumans: https://twitter.com/CoffeeVectors/status/1569416470332858372
·youtube.com·
Dissecting Recall of Factual Associations in Auto-Regressive Language Models
Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work looked into where factual associations are stored, little is known about how they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
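The attention-edge interventions the abstract mentions can be pictured with a toy example: blocking information flow from chosen source positions to the prediction position by masking those attention scores. This is a minimal self-contained sketch of the knockout idea, not the paper's implementation:

```python
# Toy illustration of an "attention edge" intervention: sever the edge from
# chosen source positions to a target position by masking those attention
# scores, then compare the target representation with and without the edge.
import torch
import torch.nn.functional as F

def attention(q, k, v, blocked_edges=()):
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    for tgt, src in blocked_edges:           # sever edge src -> tgt
        scores[..., tgt, src] = float("-inf")
    return F.softmax(scores, dim=-1) @ v

torch.manual_seed(0)
seq, dim = 6, 16                              # e.g. "The capital of France is ___"
q = k = v = torch.randn(1, seq, dim)

clean = attention(q, k, v)
# Knock out edges from assumed subject positions (2..3) to the last position:
blocked = attention(q, k, v, blocked_edges=[(seq - 1, 2), (seq - 1, 3)])

print((clean[0, -1] - blocked[0, -1]).norm())  # effect at the prediction position
```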
·arxiv.org·
SamurAI: A Versatile IoT Node With Event-Driven Wake-Up and Embedded ML Acceleration
Increased capabilities such as recognition and self-adaptability are now required from IoT applications. While IoT node power consumption is a major concern for these applications, cloud-based processing is becoming unsustainable due to continuous sensor or image data transmission over the wireless network. Thus, optimized ML capabilities and data transfers should be integrated into the IoT node. Moreover, IoT applications are torn between sporadic data-logging and energy-hungry data processing (e.g. image classification). Thus, the versatility of the node is key in addressing this wide diversity of energy and processing needs. This paper presents SamurAI, a versatile IoT node bridging this gap in processing and in energy by leveraging two on-chip sub-systems: a low power, clock-less, event-driven Always-Responsive (AR) part and an energy-efficient On-Demand (OD) part. AR contains a 1.7 MOPS event-driven, asynchronous Wake-up Controller (WuC) with a 207 ns wake-up time optimized for sporadic computing, while OD combines a deep-sleep RISC-V CPU and a 1.3 TOPS/W Machine Learning (ML) accelerator for more complex tasks up to 36 GOPS. This architecture partitioning achieves best-in-class versatility metrics such as peak performance to idle power ratio. On an applicative classification scenario, it demonstrates system power gains of up to 3.5x compared to cloud-based processing, and thus extended battery lifetime.
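To see why the AR/OD partitioning pays off, a back-of-envelope comparison of average power helps. All numbers below are illustrative assumptions, not the paper's measured values (the paper reports up to 3.5x versus cloud-based processing on its applicative scenario); the sketch only shows the shape of the tradeoff:

```python
# Back-of-envelope sketch: average power of an event-driven node (always-on
# wake-up controller, on-demand compute) vs. streaming raw data to the cloud.
# All numbers are illustrative assumptions, not the paper's measurements.

P_AR_IDLE   = 0.000_1   # W, always-responsive wake-up controller (assumed)
P_OD_ACTIVE = 0.05      # W, RISC-V + ML accelerator while classifying (assumed)
P_RADIO_TX  = 0.1       # W, radio while streaming raw data to the cloud (assumed)

EVENTS_PER_HOUR = 10
T_CLASSIFY      = 0.01                              # s of OD activity per event (assumed)
DUTY = EVENTS_PER_HOUR * T_CLASSIFY / 3600          # fraction of time OD is awake

p_event_driven = P_AR_IDLE + DUTY * P_OD_ACTIVE     # wake OD only on events
p_cloud_stream = P_RADIO_TX                         # ship everything to the cloud

print(f"event-driven node: {p_event_driven * 1e3:.3f} mW average")
print(f"cloud streaming:   {p_cloud_stream * 1e3:.3f} mW average")
print(f"gain: {p_cloud_stream / p_event_driven:.0f}x (toy numbers)")
```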
·arxiv.org·
We're Afraid Language Models Aren't Modeling Ambiguity
Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models (LMs) are increasingly employed as dialogue interfaces and writing aids, handling ambiguous language is critical to their success. We characterize ambiguity in a sentence by its effect on entailment relations with another sentence, and collect AmbiEnt, a linguist-annotated benchmark of 1,645 examples with diverse kinds of ambiguity. We design a suite of tests based on AmbiEnt, presenting the first evaluation of pretrained LMs to recognize ambiguity and disentangle possible meanings. We find that the task remains extremely challenging, including for the recent GPT-4, whose generated disambiguations are considered correct only 32% of the time in human evaluation, compared to 90% for disambiguations in our dataset. Finally, to illustrate the value of ambiguity-sensitive tools, we show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity. We encourage the field to rediscover the importance of ambiguity for NLP.
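The paper's framing, that a sentence is ambiguous when its different readings yield different entailment labels, can be sketched in a few lines; the example sentence and hand-assigned labels below are illustrative:

```python
# Minimal sketch of the paper's framing: a premise is ambiguous with respect
# to a hypothesis when its readings do not agree on one entailment label, so
# a multilabel NLI model should output more than one label. The example and
# the hand-assigned labels are illustrative.

def is_ambiguous(reading_labels: dict[str, str]) -> bool:
    """Ambiguous iff the readings disagree on the entailment label."""
    return len(set(reading_labels.values())) > 1

premise = "I saw the man with the telescope."
hypothesis = "I used a telescope."
reading_labels = {
    "the telescope was my viewing instrument": "entailment",
    "the man was carrying a telescope": "neutral",
}

if is_ambiguous(reading_labels):
    print(f"'{premise}' is ambiguous w.r.t. '{hypothesis}':",
          sorted(set(reading_labels.values())))
```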
·arxiv.org·
The Design Space of Generative Models
Card et al.'s classic paper "The Design Space of Input Devices" established the value of design spaces as a tool for HCI analysis and invention. We posit that developing design spaces for emerging pre-trained, generative AI models is necessary for supporting their integration into human-centered systems and practices. We explore what it means to develop an AI model design space by proposing two design spaces relating to generative AI models: the first considers how HCI can impact generative models (i.e., interfaces for models) and the second considers how generative models can impact HCI (i.e., models as an HCI prototyping material).
·arxiv.org·
X-Risk Analysis for AI Research
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts: First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies for having long-term impacts on the safety of future systems. Finally, we discuss a crucial concept in making AI systems safer by improving the balance between safety and general capabilities. We hope this document and the presented concepts and tools serve as a useful guide for understanding how to analyze AI x-risk.
·arxiv.org·
Dan Hendrycks on Twitter
“More and more researchers think that building AIs smarter than us could pose existential risks. But what might these risks look like, and how can we manage them? We provide a guide to help analyze how research can reduce these risks. Paper: https://t.co/SHCwaClRHA (🧵below)”
·twitter.com·
Advanced Biophysical Model to Capture Channel Variability for EQS Capacitive HBC
Human Body Communication (HBC) has emerged as a promising alternative to traditional radio frequency (RF) Wireless Body Area Network (WBAN) technologies, essentially because HBC provides a broadband communication channel with enhanced physical-layer signal security, owing to lower radiation from the human body compared to its RF counterparts. An in-depth understanding of the mechanism behind channel loss variability, and an associated biophysical model, needs to be developed before EQS-HBC can be used more widely in WBAN consumer and medical applications. Biophysical models characterizing the human body as a communication channel did not exist in the literature for a long time. Recent developments have produced models that capture the channel response for fixed transmitter and receiver positions on the human body, but these models do not capture the variability in the HBC channel as device positions vary with respect to the body. In this study, we provide a detailed analysis of the change in path loss in a capacitive HBC channel in the electroquasistatic (EQS) domain. Causes of channel loss variability, namely inter-device coupling and fringe-field effects due to the body's shadowing, are investigated. FEM-based simulation results are used to analyze the channel response of the human body for different device positions and sizes, and are further verified against measurement results to validate the developed biophysical model. Using the biophysical model, we derive a closed-form equation for the path loss in a capacitive HBC channel, which is then analyzed as a function of the device's geometric properties and its position with respect to the human body; this will help pave the path toward future EQS-HBC WBAN design.
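The paper derives its own closed-form path-loss equation; as first-order intuition, capacitive EQS-HBC is often abstracted in this literature as a capacitive voltage divider between the device's return-path capacitance and its load. The sketch below uses that abstraction with assumed values, not the paper's actual expression:

```python
# First-order sketch of capacitive EQS-HBC path loss as a capacitive voltage
# divider between return-path capacitance (C_ret) and load capacitance
# (C_load). This is a common abstraction in the EQS-HBC literature with
# illustrative values; the paper's closed-form equation additionally models
# device geometry and position on the body.
import math

def path_loss_db(c_ret: float, c_load: float) -> float:
    gain = c_ret / (c_ret + c_load)   # flat-band voltage transfer
    return -20 * math.log10(gain)

# Smaller devices have less return-path capacitance, hence more loss (assumed values):
for c_ret_pf in (0.1, 0.5, 1.0):
    print(f"C_ret = {c_ret_pf} pF -> {path_loss_db(c_ret_pf * 1e-12, 5e-12):.1f} dB")
```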
·arxiv.org·
Deep Learning in Music Recommendation Systems
Like in many other research areas, deep learning (DL) is increasingly adopted in music recommendation systems (MRS). Deep neural networks are used in this domain particularly for extracting latent factors of music items from audio signals or metadata and for learning sequential patterns of music items (tracks or artists) from music playlists or listening sessions. Latent item factors are commonly integrated into content-based filtering and hybrid MRS, whereas sequence models of music items are used for sequential music recommendation, e.g., automatic playlist continuation. This review article explains particularities of the music domain in RS research. It gives an overview of the state of the art that employs deep learning for music recommendation. The discussion is structured according to the dimensions of neural network type, input data, recommendation approach (content-based filtering, collaborative filtering, or both), and task (standard or sequential music recommendation). In addition, we discuss major challenges faced in MRS, in particular in the context of the current research on deep learning.
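The sequential-recommendation setup the review covers (latent item factors plus a sequence model over listening sessions) can be sketched as an embedding layer feeding a GRU that scores the next track; sizes and data below are illustrative:

```python
# Minimal sketch of sequential music recommendation: embed a listening
# session and predict the next track with a GRU. Vocabulary size,
# dimensions, and data are illustrative.
import torch
import torch.nn as nn

class NextTrackModel(nn.Module):
    def __init__(self, n_tracks: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_tracks, dim)   # latent item factors
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_tracks)        # score every track

    def forward(self, session):                    # session: (batch, seq_len) track ids
        h, _ = self.gru(self.embed(session))
        return self.out(h[:, -1])                  # logits for the next track

model = NextTrackModel(n_tracks=10_000)
session = torch.randint(0, 10_000, (1, 20))        # one session of 20 tracks
next_track = model(session).argmax(dim=-1)         # automatic playlist continuation
print(next_track.item())
```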
·frontiersin.org·
EFloat: Entropy-coded Floating Point Format for Compressing Vector Embedding Models
In a large class of deep learning models, including vector embedding models such as word and database embeddings, we observe that floating point exponent values cluster around a few unique values, permitting entropy based data compression. Entropy coding compresses fixed-length values with variable-length codes, encoding the most probable values with fewer bits. We propose the EFloat compressed floating point number format, which uses a variable field boundary between the exponent and significand fields. EFloat uses entropy coding on exponent values and signs to minimize the average width of the exponent and sign fields, while preserving the original FP32 exponent range unchanged. Saved bits become part of the significand field, increasing the EFloat numeric precision by 4.3 bits on average compared to other reduced-precision floating point formats. EFloat makes 8-bit and even smaller floats practical without sacrificing the exponent range of a 32-bit floating point representation. We currently use the EFloat format to save memory capacity and bandwidth consumption of large vector embedding models such as those used for database embeddings. Using RMS error as the metric, we demonstrate that EFloat provides higher accuracy than other floating point formats with an equal bit budget. The EF12 format with a 12-bit budget has less end-to-end application error than the 16-bit BFloat16. EF16 with a 16-bit budget has an RMS error 17 to 35 times lower than that of BF16 for a diverse set of embedding models. When making similarity and dissimilarity queries, using the NDCG ranking metric, EFloat matches the result quality of prior floating point representations with larger bit budgets.
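The observation driving EFloat, that FP32 exponents of embedding-like values cluster around a few codes, is easy to check; the sketch below measures the exponent entropy of synthetic data to show how many of the fixed 8 exponent bits an entropy coder could hand back to the significand:

```python
# Sketch of the observation behind EFloat: FP32 exponents of embedding-like
# values cluster around a few codes, so their entropy is far below the 8 bits
# a fixed-width exponent field spends. The data here is synthetic.
import numpy as np

values = np.random.normal(0, 0.1, 100_000).astype(np.float32)  # embedding-like
bits = values.view(np.uint32)
exponents = (bits >> 23) & 0xFF                                # 8-bit FP32 exponent field

_, counts = np.unique(exponents, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()                              # Shannon entropy in bits

print(f"distinct exponent codes: {len(counts)}")
print(f"exponent entropy: {entropy:.2f} bits (vs. 8 fixed)")
print(f"avg bits an entropy coder could hand back to the significand: {8 - entropy:.2f}")
```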
·arxiv.org·
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT
Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.
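For a flavor of what a prompt pattern looks like and how patterns combine, here is a minimal sketch; a persona-style pattern appears in the catalog, but the wording and helper names below are illustrative, not the paper's:

```python
# Illustrative sketch of prompt patterns as reusable templates that can be
# combined into one prompt. The pattern wording and helper names are
# assumptions for illustration, not the paper's catalog entries verbatim.

def persona_pattern(role: str) -> str:
    return f"Act as {role}."

def output_format_pattern(fmt: str) -> str:
    return f"Whenever you produce output, format it as {fmt}."

def compose(*patterns: str) -> str:
    return " ".join(patterns)   # patterns combine into a single prompt

prompt = compose(
    persona_pattern("a senior code reviewer"),
    output_format_pattern("a numbered list of issues with severity labels"),
)
print(prompt)
```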
·arxiv.org·