Enter PaLM 2: Full Breakdown (92 Pages Read + Gemini Before GPT 5?)
Google puts its foot on the accelerator, casting aside safety concerns to not only release a GPT 4-competitive model, PaLM 2, but also announce that it is already training Gemini, a GPT 5 competitor [likely on TPU v5 chips]. This is truly a major day in AI history, and I try to cover it all. I'll show the benchmarks in which PaLM 2 (which now powers Bard) beats GPT 4, and detail how they use SmartGPT-like techniques to boost performance. Crazily enough, PaLM 2 beats even Google Translate, due in large part to the text it was trained on. We'll talk coding in Bard, translation, MMLU, Big Bench, and much more. I'll end on the Universal Translator deepfakes, the underwhelming results of Sundar Pichai and Sam Altman's trip to the White House, and what Hinton says about it all. On a more positive note, I cover Med PaLM 2, which could genuinely save thousands of lives.
PaLM 2 Technical Report: https://ai.google/static/documents/palm2techreport.pdf
Release Notes Google Blog: https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
Bard Access: https://bard.google.com/
Scaling Transformer to 1M tokens: https://arxiv.org/pdf/2304.11062.pdf
GPT 4 Technical Report: https://arxiv.org/pdf/2303.08774.pdf
Bard Languages: https://support.google.com/bard/answer/13575153?hl=en
Self Consistency Paper: https://arxiv.org/pdf/2203.11171.pdf
Are Emergent Abilities a Mirage: https://arxiv.org/pdf/2304.15004.pdf
Sparks of AGI Paper: https://arxiv.org/pdf/2303.12712.pdf
Big Bench Hard: https://github.com/suzgunmirac/BIG-Bench-Hard
Google Keynote: https://www.youtube.com/watch?v=cNfINi5CNbY
Gemini: https://www.youtube.com/watch?v=1UvUjTaJRz0
Med PaLM 2: https://www.youtube.com/watch?v=k_-Z_TkHMqA
TPU v5: https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html
Hinton Warning: https://www.youtube.com/watch?v=FAbsoxQtUwM
White House Readout: https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/
Patreon: https://www.patreon.com/AIExplained
·youtube.com·
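The description above mentions "SmartGPT-like techniques" and links the Self Consistency paper. The core idea of self-consistency is to sample several chain-of-thought completions and majority-vote over their final answers. A minimal sketch, where `toy_model` is a hypothetical stand-in for any sampling LLM call (not any real API):

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_answer, prompt, n_samples=5):
    """Sample several completions for the same prompt and return the
    majority-vote final answer (the self-consistency decoding idea)."""
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic toy stand-in: a "model" that cycles through a fixed
# answer distribution (right twice, wrong once).
_fake_answers = cycle(["42", "42", "17"])
def toy_model(prompt):
    return next(_fake_answers)

print(self_consistency(toy_model, "What is 6*7?", n_samples=6))  # → 42
```

Even though individual samples can be wrong, the vote recovers the modal answer; the linked paper reports exactly this effect on reasoning benchmarks.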
On the Security Risks of Knowledge Graph Reasoning
Knowledge graph reasoning (KGR) -- answering complex logical queries over large knowledge graphs -- represents an important artificial intelligence task, entailing a range of applications (e.g., cyber threat hunting). However, despite its surging popularity, the potential security risks of KGR are largely unexplored, which is concerning given the increasing use of such capability in security-critical domains. This work represents a solid initial step towards bridging this striking gap. We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors. Further, we present ROAR, a new class of attacks that instantiate a variety of such threats. Through empirical evaluation in representative use cases (e.g., medical decision support, cyber threat hunting, and commonsense reasoning), we demonstrate that ROAR is highly effective at misleading KGR into suggesting pre-defined answers for target queries, yet has negligible impact on non-target ones. Finally, we explore potential countermeasures against ROAR, including filtering of potentially poisoned knowledge and training with adversarially augmented queries, which leads to several promising research directions.
·arxiv.org·
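As a toy illustration of the threat class the abstract describes (not ROAR itself), here is a minimal sketch of knowledge poisoning: a knowledge graph stored as (head, relation, tail) triples, where injecting a single crafted triple misleads a target query while leaving a non-target query untouched. All entity and relation names are illustrative:

```python
def answer(kg, head, relation):
    """Return all tails reachable from `head` via `relation`, sorted."""
    return sorted(t for h, r, t in kg if h == head and r == relation)

# Toy medical-decision-support knowledge graph.
kg = {
    ("flu", "treated_by", "oseltamivir"),
    ("strep", "treated_by", "penicillin"),
}

# The attacker injects one poisoning triple aimed at the "flu" query.
poisoned = kg | {("flu", "treated_by", "attacker_drug")}

print(answer(poisoned, "flu", "treated_by"))    # target query now misled
print(answer(poisoned, "strep", "treated_by"))  # non-target query unaffected
```

The paper's actual attacks operate on learned KGR models (via knowledge poisoning and query perturbation), but this captures the stealth property the abstract highlights: pre-defined answers on target queries, negligible impact elsewhere.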
AutoML-GPT: Automatic Machine Learning with GPT
AI tasks encompass a wide range of domains and fields. While numerous AI models have been designed for specific tasks and applications, they often require considerable human effort in finding the right model architecture, optimization algorithm, and hyperparameters. Recent advances in large language models (LLMs) like ChatGPT show remarkable capabilities in various aspects of reasoning, comprehension, and interaction. Consequently, we propose developing task-oriented prompts and automatically utilizing LLMs to automate the training pipeline. To implement this concept, we present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters. AutoML-GPT dynamically takes user requests from the model and data cards and composes the corresponding prompt paragraph. Ultimately, with this prompt paragraph, AutoML-GPT will automatically conduct the experiments, from data processing to model architecture, hyperparameter tuning, and predicted training logs. By leveraging AutoML-GPT's robust language capabilities and the available AI models, AutoML-GPT can tackle numerous intricate AI tasks across diverse datasets. This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many AI tasks.
·arxiv.org·
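The abstract's central mechanism is composing a "prompt paragraph" from user-supplied model and data cards, which the LLM then uses to drive the training pipeline. A hypothetical sketch of that composition step; the card field names here are illustrative, not taken from the paper:

```python
def compose_prompt(model_card: dict, data_card: dict) -> str:
    """Compose a task-oriented prompt paragraph from a model card and a
    data card (illustrative field names, not the paper's schema)."""
    return (
        f"Train a {model_card['architecture']} for {model_card['task']} "
        f"on {data_card['name']} ({data_card['size']} examples, "
        f"input: {data_card['input_type']}). "
        f"Suggest hyperparameters and report the predicted training log."
    )

prompt = compose_prompt(
    {"architecture": "ResNet-50", "task": "image classification"},
    {"name": "CIFAR-10", "size": 50000, "input_type": "32x32 RGB images"},
)
print(prompt)
```

In the paper, a paragraph like this is fed to GPT, which then selects an architecture, tunes hyperparameters, and emits a predicted training log; the sketch only shows the card-to-prompt bridge.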