Found 101 bookmarks
Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks | Research
Deep Neural Networks (DNNs) have reinvigorated real-world applications that rely on learning patterns of data and are permeating into different industries and markets. Cloud infrastructure and accelerators that offer INFerence-as-a-Service (INFaaS) have become the enabler of this rather quick and invasive shift in the industry. To that end, mostly accelerator-based INFaaS (Google’s TPU [1], NVIDIA T4 [2], Microsoft Brainwave [3], etc.) has become the backbone of many real-life applications.
·research.nvidia.com·
Dynamic neural network approach to targeted balance assessment of individuals with and without neurological disease during non-steady-state locomotion | Journal of NeuroEngineering and Rehabilitation | Full Text
Background: Clinical balance assessments often rely on functional tasks as a proxy for balance (e.g., Timed Up and Go). In contrast, analyses of balance in research settings incorporate quantitative biomechanical measurements (e.g., whole-body angular momentum, H) using motion capture techniques. Fully instrumenting patients in the clinic is not feasible, and thus it is desirable to estimate biomechanical quantities related to balance from measurements taken from a subset of the body segments. Machine learning algorithms are well-suited for this type of low- to high-dimensional mapping. Thus, our goal was to develop and test an artificial neural network to predict segment contributions to whole-body angular momentum from linear acceleration and angular velocity signals (i.e., those typically available to wearable inertial measurement units, IMUs) taken from a sparse set of body segments.
Methods: Optical motion capture data were collected from five able-bodied individuals and five individuals with Parkinson's disease (PD) walking on a non-steady-state locomotor circuit comprising stairs, ramps and changes of direction. Motion data were used to calculate angular momentum (i.e., “gold standard” output data) and body-segment linear acceleration and angular velocity data from local reference frames at the wrists, ankles and neck (i.e., network input). A dynamic nonlinear autoregressive neural network was trained using the able-bodied data (pooled across subjects). The neural network was tested on data from individuals with PD with noise added to simulate real-world IMU data.
Results: Correlation coefficients of the predicted segment contributions to whole-body angular momentum with the gold standard data were 0.989 for able-bodied individuals and 0.987 for individuals with PD. Mean RMS errors were between 2% and 7% of peak signal magnitude for all body segments during completion of the locomotor circuits.
Conclusion: Our results suggest that estimating segment contributions to angular momentum from mechanical signals (linear acceleration, angular velocity) from a sparse set of body segments is a feasible method for assessing coordination of balance—even using a network trained on able-bodied data to assess individuals with neurological disease. These targeted estimates of segmental momenta could potentially be delivered to clinicians using a sparse sensor set (and likely in real-time) in order to enhance balance rehabilitation of people with PD.
·jneuroengrehab.biomedcentral.com·
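Note for this entry: a minimal, hypothetical PyTorch sketch (not the authors' code) of the low- to high-dimensional mapping the abstract describes: a NARX-style network fed current and delayed IMU-like signals from five sensor sites plus its own delayed outputs, predicting per-segment contributions to whole-body angular momentum. Sensor count, segment count, delay depth, and layer sizes are all assumptions.

```python
import torch
import torch.nn as nn

N_SENSORS = 5            # wrists, ankles, neck (per the abstract)
N_CHANNELS = 6           # 3 linear-acceleration + 3 angular-velocity channels per sensor
N_SEGMENTS = 13          # assumed number of body segments contributing to H
DELAYS = 4               # assumed number of past samples in the tapped delay line

class NARXEstimator(nn.Module):
    """Feed-forward NARX sketch: current + delayed inputs and delayed outputs -> segment H."""
    def __init__(self, hidden=64):
        super().__init__()
        in_dim = N_SENSORS * N_CHANNELS * (DELAYS + 1) + 3 * N_SEGMENTS * DELAYS
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 3 * N_SEGMENTS),   # 3D momentum contribution per segment
        )

    def forward(self, x_hist, y_hist):
        # x_hist: (batch, DELAYS+1, N_SENSORS*N_CHANNELS) current + past IMU samples
        # y_hist: (batch, DELAYS, 3*N_SEGMENTS)            past predictions fed back
        flat = torch.cat([x_hist.flatten(1), y_hist.flatten(1)], dim=1)
        return self.net(flat)

model = NARXEstimator()
x = torch.randn(8, DELAYS + 1, N_SENSORS * N_CHANNELS)
y_prev = torch.zeros(8, DELAYS, 3 * N_SEGMENTS)
print(model(x, y_prev).shape)  # torch.Size([8, 39])
```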
google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiOhf-x18SBAxXsGFkFHejKD5QQFnoFCLkCEAE&url=https%3A%2F%2Fwww.cs.ucr.edu%2F~nael%2Fpubs%2Fmicro18.pdf&usg=AOvVaw2Eal3nWFqT7rKtPM35lWmZ&opi=89978449
(Google redirect; destination: https://www.cs.ucr.edu/~nael/pubs/micro18.pdf)
·google.com·
google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiOhf-x18SBAxXsGFkFHejKD5QQFnoFCOgBEAE&url=https%3A%2F%2Fwww.ecva.net%2Fpapers%2Feccv_2022%2Fpapers_ECCV%2Fpapers%2F136640628.pdf&usg=AOvVaw2b7ApLuzeVyM1WPUjWE86e&opi=89978449
(Google redirect; destination: https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136640628.pdf)
·google.com·
google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiOhf-x18SBAxXsGFkFHejKD5QQFnoFCI0BEAE&url=https%3A%2F%2Fopenaccess.thecvf.com%2Fcontent%2FCVPR2022%2Fpapers%2FLiu_Instance-Aware_Dynamic_Neural_Network_Quantization_CVPR_2022_paper.pdf&usg=AOvVaw1t-GfCzIS8sgWe0c1RHp1z&opi=89978449
(Google redirect; destination: https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Instance-Aware_Dynamic_Neural_Network_Quantization_CVPR_2022_paper.pdf)
·google.com·
[2304.12214] Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration
Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven...
·arxiv.org·
DyPipe: A Holistic Approach to Accelerating Dynamic Neural Networks with Dynamic Pipelining
Dynamic neural network (NN) techniques are increasingly important because they facilitate deep learning techniques with more complex network architectures. However, existing studies predominantly optimize static computational graphs with static scheduling methods, and thus focus on static neural networks in deep neural network (DNN) accelerators. We analyze the execution process of dynamic neural networks and observe that dynamic features introduce challenges for efficient scheduling and pipelining in existing DNN accelerators. We propose DyPipe, a holistic approach to optimizing dynamic neural network inference in enhanced DNN accelerators. DyPipe achieves significant performance improvements for dynamic neural networks while introducing negligible overhead for static neural networks. Our evaluation demonstrates that DyPipe achieves a 1.7× speedup on dynamic neural networks and maintains more than 96% performance for static neural networks.
·jcst.ict.ac.cn·
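Several entries in this list (DyPipe, DPACS, the PyTorch repo below) revolve around "dynamic" neural networks, i.e., models whose computation depends on the input. A minimal, hypothetical PyTorch sketch of such input-dependent control flow follows (an early-exit block; the layer sizes and threshold are arbitrary assumptions). This is exactly the kind of dynamic feature that defeats static scheduling in conventional DNN accelerators.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy dynamic network: depth depends on the input (batch of one, for simplicity)."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Linear(16, 16)
        self.block2 = nn.Linear(16, 16)
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        h = torch.relu(self.block1(x))
        # Input-dependent depth: skip the second block for "easy" inputs.
        if h.norm() < 4.0:
            return self.head(h)
        h = torch.relu(self.block2(h))
        return self.head(h)

net = EarlyExitNet()
print(net(torch.randn(1, 16)).shape)  # torch.Size([1, 2]) regardless of the path taken
```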
GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Tensors and Dynamic neural networks in Python with strong GPU acceleration
·github.com·
DPACS: Hardware Accelerated Dynamic Neural Network Pruning through Algorithm-Architecture Co-design | Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2
·dl.acm.org·
google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiOhf-x18SBAxXsGFkFHejKD5QQFnoFCIwBEAE&url=https%3A%2F%2Fpar.nsf.gov%2Fservlets%2Fpurl%2F10295349&usg=AOvVaw17Tp5c1cy720Tw2KHidRSD&opi=89978449
(Google redirect; destination: https://par.nsf.gov/servlets/purl/10295349)
·google.com·
"Dynamic Neural Accelerator and MERA Compiler for Low-latency and Energy-efficient Inference at the Edge," a Presentation from EdgeCortix - Edge AI and Vision Alliance
"Dynamic Neural Accelerator and MERA Compiler for Low-latency and Energy-efficient Inference at the Edge," a Presentation from EdgeCortix - Edge AI and Vision Alliance
Sakyasingha Dasgupta, the founder and CEO of EdgeCortix, presents the “Dynamic Neural Accelerator and MERA Compiler for Low-latency and Energy-efficient
·edge-ai-vision.com·
"Dynamic Neural Accelerator and MERA Compiler for Low-latency and Energy-efficient Inference at the Edge," a Presentation from EdgeCortix - Edge AI and Vision Alliance
Protect your brain from stress - Harvard Health
Stress can affect your memory and cognition and put you at higher risk for Alzheimer’s disease and dementia. Stress management tools can help reduce this risk....
·health.harvard.edu·
Neuroscience For Kids - action potential
Intended for elementary and secondary school students and teachers who are interested in learning about the nervous system and brain with hands-on activities, experiments and information.
·faculty.washington.edu·
Your Anxious Brain (and How to Rewire It) - Dr Nathan Brandon
Anxiety is a normal human emotion. We all feel it from time to time. However, when anxiety becomes excessive or chronic, it can have very serious consequences on your physical and mental health. Anxiety is an umbrella term used to describe a collection of disorders that are characterized by feelings of fear, dread, and uneasiness. […]
·drnathanbrandon.com·
The book of neurogenesis - Harvard Health
Scientists are looking at why later-life neurogenesis—the brain's ability to produce new neurons—primarily happens in the hippocampus, the region responsible for learning information and storing...
·health.harvard.edu·
9 neuroplasticity exercises to boost productivity - Work Life by Atlassian
Neuroplasticity is the brain’s ability to learn and adapt. Here's how to rewire your brain to optimize your cognitive performance.
·atlassian.com·
12 ways to keep your brain young - Harvard Health
Mental decline is common, and it's one of the most feared consequences of aging. But cognitive impairment is not inevitable. Here are 12 ways you can help reduce your risk of age-related memory loss...
·health.harvard.edu·
EdgeCortix Announces Sakura AI Co-Processor
TOKYO, April 22, 2022 — EdgeCortix Inc., the innovative fabless semiconductor design company with a software-first approach, focused on delivering class-leading compute efficiency and latency for edge artificial intelligence (AI) inference; […]
·hpcwire.com·
Obstruction simulation in real-time 3D audio on edge systems | IEEE Conference Publication | IEEE Xplore
After the COVID-induced lock-downs, augmented/virtual reality turned from leisure to desired reality. Real-time 3D audio is a crucial enabler for these technologies. Nevertheless, systems offering object spatialization in 3D audio fall into two limited cases: they either require long-running pre-renders or involve powerful computing platforms. Furthermore, they mainly focus on active audio sources, while humans rely on the sound's interactions with passive obstructions to sense their environment. We propose a hardware co-processor for real-time 3D audio spatialization supporting passive obstructions. Our solution attains latency similar to workstations while draining a tenth of the power, making it suitable for embedded applications.
·ieeexplore.ieee.org·
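For context on what "obstruction simulation" involves, here is a rough, illustrative software sketch (not the paper's co-processor design): each source is attenuated with distance, and when a passive obstacle intersects the source-to-listener path, the signal is further damped and low-pass filtered to mimic occlusion. The geometry test, filter, and constants are assumptions chosen for illustration only.

```python
import numpy as np

def sphere_blocks_path(src, lst, center, radius):
    """True if the sphere (center, radius) intersects the segment src -> lst."""
    d = lst - src
    t = np.clip(np.dot(center - src, d) / np.dot(d, d), 0.0, 1.0)
    closest = src + t * d
    return np.linalg.norm(center - closest) <= radius

def one_pole_lowpass(x, alpha):
    """Simple one-pole low-pass to 'muffle' an occluded signal."""
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def render_source(signal, src, lst, obstacles):
    dist = np.linalg.norm(lst - src)
    out = signal / max(dist, 1.0)                    # inverse-distance attenuation
    for center, radius in obstacles:
        if sphere_blocks_path(src, lst, center, radius):
            out = 0.5 * one_pole_lowpass(out, 0.15)  # occlusion: damp + muffle
    return out

# 0.1 s of a 440 Hz tone at 48 kHz, rendered with and without an obstacle in the path.
sig = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
dry = render_source(sig, np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0]), [])
occluded = render_source(sig, np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0]),
                         [(np.array([1.5, 0.0, 0.0]), 0.5)])
print(dry[:3], occluded[:3])
```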
Rajesh Rao receives Weill Neurohub grant to develop a ‘brain co-processor’ | UW Department of Electrical & Computer Engineering
google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjGyeLU1cSBAxUhGVkFHTLqCSQQFnoFCJ4CEAE&url=https%3A%2F%2Fnyuad.nyu.edu%2Fcontent%2Fdam%2Fnyuad%2Fnews%2Fnews%2F2019%2Fjune%2FCoPHEE-report.pdf&usg=AOvVaw1hl63t-3jv05Zz_-A5QjSJ&opi=89978449
(Google redirect; destination: https://nyuad.nyu.edu/content/dam/nyuad/news/news/2019/june/CoPHEE-report.pdf)
·google.com·
Co-processor for upcoming season for vision - Technical / Programming - Chief Delphi
Hello everyone, I have been doing a lot of reading about Apriltags for the upcoming season. The point of this is to hopefully get vision added to our robot. With RPIs being difficult to get a hold of (I hear, haven’t tried), and some people’s concerns about them being underpowered for Apriltags, what would everyone’s recommendations be? An alternative co-processor?
·chiefdelphi.com·