DensePose From WiFi
Advanced Biophysical Model to Capture Channel Variability for EQS Capacitive HBC
Human Body Communication (HBC) has emerged as a promising alternative to
traditional radio frequency (RF) Wireless Body Area Network (WBAN)
technologies, chiefly because HBC provides a broadband communication channel
with enhanced physical-layer signal security, owing to the lower radiation
from the human body compared to its RF counterparts. An in-depth
understanding of the mechanism behind channel-loss variability, together with
an associated biophysical model, must be developed before EQS-HBC can see
wider use in WBAN consumer and medical applications. Biophysical models
characterizing the human body as a communication channel were long absent
from the literature. Recent work has produced models that capture the channel
response for fixed transmitter and receiver positions on the human body, but
these models do not capture the variability of the HBC channel as device
positions vary with respect to the body. In this study, we provide a detailed
analysis of the change in path loss in a capacitive HBC channel in the
electroquasistatic (EQS) regime. Two causes of channel-loss variability are
investigated: inter-device coupling, and fringe-field effects arising from
the body's shadowing. FEM-based simulation results are used to analyze the
channel response of the human body for different device positions and sizes,
and are verified against measurement results to validate the developed
biophysical model. Using the biophysical model, we derive a closed-form
equation for the path loss in a capacitive HBC channel, which we then analyze
as a function of the geometric properties of the device and its position with
respect to the human body, helping pave the path toward future EQS-HBC WBAN
design.
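Capacitive EQS-HBC channels are commonly modeled as a capacitive voltage divider between the device's return-path capacitance and the effective load at the receiver. A minimal sketch of that divider model follows; the component values and the function name are illustrative assumptions, not figures or notation from the paper:

```python
import math

def path_loss_db(c_ret, c_load):
    """Path loss of an ideal capacitive divider: Vout/Vin = C_ret / (C_ret + C_load).

    c_ret  -- return-path capacitance between device ground plane and earth ground (F)
    c_load -- effective load capacitance at the receiver (F)
    """
    gain = c_ret / (c_ret + c_load)
    return -20.0 * math.log10(gain)

# Illustrative values: a small wearable with ~1 pF return-path capacitance
# driving a ~10 pF effective load sees roughly 20.8 dB of loss.
print(round(path_loss_db(1e-12, 10e-12), 1))  # 20.8
```

This captures why device geometry matters in such models: a larger device ground plane raises the return-path capacitance, which directly reduces the divider loss.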
X-Risk Analysis for AI Research
Artificial intelligence (AI) has the potential to greatly improve society,
but as with any powerful technology, it comes with heightened risks and
responsibilities. Current AI research lacks a systematic discussion of how to
manage long-tail risks from AI systems, including speculative long-term risks.
Keeping in mind the potential benefits of AI, there is some concern that
building ever more intelligent and powerful AI systems could eventually result
in systems that are more powerful than us; some say this is like playing with
fire and speculate that this could create existential risks (x-risks). To add
precision and ground these discussions, we provide a guide for how to analyze
AI x-risk, which consists of three parts: First, we review how systems can be
made safer today, drawing on time-tested concepts from hazard analysis and
systems safety that have been designed to steer large processes in safer
directions. Next, we discuss strategies for having long-term impacts on the
safety of future systems. Finally, we discuss a crucial concept in making AI
systems safer by improving the balance between safety and general capabilities.
We hope this document and the presented concepts and tools serve as a useful
guide for understanding how to analyze AI x-risk.
Dan Hendrycks on Twitter
“More and more researchers think that building AIs smarter than us could pose existential risks. But what might these risks look like, and how can we manage them? We provide a guide to help analyze how research can reduce these risks.
Paper: https://t.co/SHCwaClRHA (🧵below)”
Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models
Reflexion: an autonomous agent with dynamic memory and self-reflection
Recent advancements in decision-making large language model (LLM) agents have demonstrated impressive performance across various benchmarks. However, these state-of-the-art approaches typically...
DensePose: Dense Human Pose Estimation In The Wild
In this work, we establish dense correspondences between RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We first gather dense...
Generative AI; Relevance to Librarians | Lucidea
Generative AI can be used in libraries in databases, in the chat systems librarians use, and in cataloging.
Interpretability in Machine Learning
Why we need to understand how our models make predictions
Navigating the Sea of Explainability
Setting the right course and steering responsibly
A Brief History of Machine Learning Models Explainability
If software ate the world, models will run it. But are we ready to be controlled by black-box intelligent software?
The Design Space of Generative Models
Card et al.'s classic paper "The Design Space of Input Devices" established
the value of design spaces as a tool for HCI analysis and invention. We posit
that developing design spaces for emerging pre-trained, generative AI models is
necessary for supporting their integration into human-centered systems and
practices. We explore what it means to develop an AI model design space by
proposing two design spaces relating to generative AI models: the first
considers how HCI can impact generative models (i.e., interfaces for models)
and the second considers how generative models can impact HCI (i.e., models as
an HCI prototyping material).
This new technology could blow away GPT-4 and everything like it
The Hyena code is able to handle amounts of data that make GPT-style technology run out of memory and fail.
SamurAI: A Versatile IoT Node With Event-Driven Wake-Up and Embedded ML Acceleration
Increased capabilities such as recognition and self-adaptability are now
required from IoT applications. While IoT node power consumption is a major
concern for these applications, cloud-based processing is becoming
unsustainable due to continuous sensor or image data transmission over the
wireless network. Thus optimized ML capabilities and data transfers should be
integrated in the IoT node. Moreover, IoT applications are torn between
sporadic data-logging and energy-hungry data processing (e.g. image
classification). Thus, the versatility of the node is key in addressing this
wide diversity of energy and processing needs. This paper presents SamurAI, a
versatile IoT node bridging this gap in processing and in energy by leveraging
two on-chip sub-systems: a low power, clock-less, event-driven
Always-Responsive (AR) part and an energy-efficient On-Demand (OD) part. AR
contains a 1.7MOPS event-driven, asynchronous Wake-up Controller (WuC) with a
207ns wake-up time optimized for sporadic computing, while OD combines a
deep-sleep RISC-V CPU and a 1.3TOPS/W Machine Learning (ML) accelerator for
more complex tasks up to 36GOPS. This architecture partitioning achieves
best-in-class versatility metrics such as peak performance to idle power
ratio. On an
applicative classification scenario, it demonstrates system power gains, up to
3.5x compared to cloud-based processing, and thus extended battery lifetime.
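The power argument behind this kind of AR/OD partitioning can be sketched with a simple duty-cycle energy model. All numbers below are hypothetical placeholders for illustration, not measured figures from the SamurAI paper:

```python
def avg_power(p_idle, p_active, duty_cycle):
    """Time-averaged power of a node that is active for a fraction of the time."""
    return p_idle * (1.0 - duty_cycle) + p_active * duty_cycle

# Hypothetical numbers: continuously offloading raw data to the cloud keeps a
# power-hungry radio active, while local inference activates a cheaper
# on-chip ML block slightly more often.
p_cloud = avg_power(p_idle=10e-6, p_active=50e-3, duty_cycle=0.01)  # radio TX
p_local = avg_power(p_idle=10e-6, p_active=5e-3,  duty_cycle=0.02)  # on-chip ML
print(p_cloud / p_local)  # system power gain from processing locally
```

The model makes the paper's headline metric intuitive: with sporadic workloads, average power is dominated by idle power and by how often the expensive block wakes up, which is exactly what the event-driven wake-up controller minimizes.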
We're Afraid Language Models Aren't Modeling Ambiguity
Ambiguity is an intrinsic feature of natural language. Managing ambiguity is
a key part of human language understanding, allowing us to anticipate
misunderstanding as communicators and revise our interpretations as listeners.
As language models (LMs) are increasingly employed as dialogue interfaces and
writing aids, handling ambiguous language is critical to their success. We
characterize ambiguity in a sentence by its effect on entailment relations with
another sentence, and collect AmbiEnt, a linguist-annotated benchmark of 1,645
examples with diverse kinds of ambiguity. We design a suite of tests based on
AmbiEnt, presenting the first evaluation of pretrained LMs to recognize
ambiguity and disentangle possible meanings. We find that the task remains
extremely challenging, including for the recent GPT-4, whose generated
disambiguations are considered correct only 32% of the time in human
evaluation, compared to 90% for disambiguations in our dataset. Finally, to
illustrate the value of ambiguity-sensitive tools, we show that a multilabel
NLI model can flag political claims in the wild that are misleading due to
ambiguity. We encourage the field to rediscover the importance of ambiguity for
NLP.
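The paper's framing, where ambiguity surfaces as multiple defensible entailment labels for the same premise-hypothesis pair, can be illustrated with a toy multilabel check. The example sentences, labels, and function are my own illustration, not items from the AmbiEnt benchmark:

```python
# A premise with a scope ambiguity supports different entailment labels
# depending on which reading of the premise is intended.
example = {
    "premise": "Every student read a book.",
    "hypothesis": "There is a single book that every student read.",
    # one label per defensible reading of the premise
    "labels_per_reading": {
        "wide-scope 'a book'": "entailment",
        "narrow-scope 'a book'": "neutral",
    },
}

def is_ambiguous(item):
    """An item is ambiguity-sensitive if its readings disagree on the label."""
    return len(set(item["labels_per_reading"].values())) > 1

print(is_ambiguous(example))  # True
```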
Hyperbolic Deep Reinforcement Learning
Many RL problems have a hierarchical, tree-like nature. Hyperbolic geometry offers a powerful prior for such problems.
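The reason hyperbolic geometry suits tree-like structure is that distances in the Poincaré ball grow exponentially toward the boundary, mirroring how the number of nodes in a tree grows with depth. A minimal sketch of the standard Poincaré-ball distance (not code from the paper):

```python
import math

def poincare_distance(x, y):
    """Geodesic distance in the Poincaré ball model of hyperbolic space.

    d(x, y) = arcosh(1 + 2*|x-y|^2 / ((1 - |x|^2) * (1 - |y|^2)))
    Points must lie strictly inside the unit ball.
    """
    sq = lambda v: sum(c * c for c in v)
    diff = [a - b for a, b in zip(x, y)]
    arg = 1.0 + 2.0 * sq(diff) / ((1.0 - sq(x)) * (1.0 - sq(y)))
    return math.acosh(arg)

# Equal Euclidean gaps cost far more hyperbolic distance near the boundary
# than at the origin -- room for exponentially many leaves.
print(poincare_distance([0.0, 0.0], [0.1, 0.0]) <
      poincare_distance([0.8, 0.0], [0.9, 0.0]))  # True
```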
EFloat: Entropy-coded Floating Point Format for Compressing Vector Embedding Models
In a large class of deep learning models, including vector embedding models
such as word and database embeddings, we observe that floating point exponent
values cluster around a few unique values, permitting entropy based data
compression. Entropy coding compresses fixed-length values with variable-length
codes, encoding most probable values with fewer bits. We propose the EFloat
compressed floating point number format that uses a variable field boundary
between the exponent and significand fields. EFloat uses entropy coding on
exponent values and signs to minimize the average width of the exponent and
sign fields, while preserving the original FP32 exponent range unchanged. Saved
bits become part of the significand field, increasing the EFloat numeric
precision by 4.3 bits on average compared to other reduced-precision floating
point formats. EFloat makes 8-bit and even smaller floats practical without
sacrificing the exponent range of a 32-bit floating point representation. We
currently use the EFloat format for saving memory capacity and bandwidth
consumption of large vector embedding models such as those used for database
embeddings. Using the RMS error as metric, we demonstrate that EFloat provides
higher accuracy than other floating point formats with equal bit budget. The
EF12 format with 12-bit budget has less end-to-end application error than the
16-bit BFloat16. EF16 with 16-bit budget has an RMS-error 17 to 35 times less
than BF16 RMS-error for a diverse set of embedding models. When making
similarity and dissimilarity queries, using the NDCG ranking metric, EFloat
matches the result quality of prior floating point representations with larger
bit budgets.
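The premise of EFloat, that exponent values cluster around a few unique values and so compress well, can be checked directly by measuring the Shannon entropy of the FP32 exponent field; that entropy is the average code length an ideal entropy coder would need, versus the 8 bits FP32 always spends. A small sketch (the data and helper names are my own illustration, not the paper's implementation):

```python
import math
import struct
from collections import Counter

def fp32_exponent(x):
    """Biased 8-bit exponent field of a float stored as IEEE-754 binary32."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return (bits >> 23) & 0xFF

def exponent_entropy(values):
    """Shannon entropy (bits) of the exponent field over a set of values."""
    counts = Counter(fp32_exponent(v) for v in values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Embedding-like values clustered in [0.01, 0.9] use only a handful of
# distinct exponents, so their entropy falls well below 8 bits -- the bits
# an entropy coder saves can be reassigned to the significand.
clustered = [0.01 * (i % 90 + 1) for i in range(1000)]
print(exponent_entropy(clustered) < 8.0)  # True
```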