Public

451 bookmarks
ARM vs RISC-V: What Are the Major Differences?
What are the major differences between RISC-V and ARM, and will one win over the other?
CISC allows a computer to do more in a single instruction cycle, while RISC allows for simpler programming. Generally speaking, RISC requires more clock cycles to complete the same work as a single CISC instruction, but can do so more efficiently (energy-wise), making it ideal for mobile applications. While x86/x64 remains the dominant architecture in the heavy processing market, ARM may face serious competition from a new processor architecture, RISC-V.
·electropages.com·
ARM vs. RISC-V: Is one better than the other? | Digital Trends
If you wanted to make a CPU, there are two obvious choices: ARM and RISC-V. But what are the differences between the two, and is one better than the other?
ARM and RISC-V are instruction set architectures, or ISAs. The ISA is the foundation of a processor and the most fundamental component of any CPU. Both ISAs are reduced instruction set computer (RISC) designs, meaning the base instructions the CPU has access to are inherently simple but ideally fast to execute. The ‘R’ in ARM actually stands for RISC (though ARM is no longer treated as an acronym), so in this sense the two ISAs are similar.
·digitaltrends.com·
Hamiltonian path - Wikipedia
In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected or directed graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a cycle that visits each vertex exactly once. A Hamiltonian path that starts and ends at adjacent vertices can be completed by adding one more edge to form a Hamiltonian cycle, and removing any edge from a Hamiltonian cycle produces a Hamiltonian path. Determining whether such paths and cycles exist in graphs (the Hamiltonian path problem and the Hamiltonian cycle problem) is NP-complete.
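The definition above translates directly into a brute-force search. Below is a minimal backtracking sketch (plain Python, illustrative only): it looks for a Hamiltonian path in a small adjacency-set graph, and its worst-case exponential running time is exactly what the NP-completeness result would lead you to expect.
```python
def hamiltonian_path(adj):
    """Backtracking search for a Hamiltonian path in an undirected graph.

    adj: dict mapping each vertex to a set of neighbouring vertices.
    Returns a list visiting each vertex exactly once, or None if no path exists.
    """
    vertices = list(adj)

    def extend(path, visited):
        if len(path) == len(vertices):
            return path
        for v in adj[path[-1]]:
            if v not in visited:
                result = extend(path + [v], visited | {v})
                if result:
                    return result
        return None

    for start in vertices:
        result = extend([start], {start})
        if result:
            return result
    return None

# Example: a 4-cycle has a Hamiltonian path (and a Hamiltonian cycle).
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hamiltonian_path(square))  # e.g. [0, 1, 2, 3]
```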
·en.wikipedia.org·
Zero-knowledge proof - Wikipedia
In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true while the prover avoids conveying any additional information apart from the fact that the statement is indeed true. The essence of zero-knowledge proofs is that it is trivial to prove that one possesses knowledge of certain information by simply revealing it; the challenge is to prove such possession without revealing the information itself or any additional information
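As a concrete, toy-sized illustration of the idea, here is a sketch of one classical construction, the Schnorr identification protocol: the prover demonstrates knowledge of a discrete logarithm x with y = g^x without revealing x. The group parameters below are deliberately tiny and purely illustrative; real deployments use groups of 2048 bits or more.
```python
import secrets

# Toy Schnorr identification protocol (honest-verifier zero knowledge).
p, q, g = 23, 11, 4          # g generates the subgroup of prime order q in Z_p*

x = 7                        # prover's secret
y = pow(g, x, p)             # public key: the statement is "I know x with g^x = y"

def prover_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)   # keep r secret, send the commitment t = g^r

def prover_respond(r, c):
    return (r + c * x) % q   # response s; on its own it reveals nothing about x

def verifier_check(t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == t * y^c

r, t = prover_commit()            # prover -> verifier: commitment t
c = secrets.randbelow(q)          # verifier -> prover: random challenge c
s = prover_respond(r, c)          # prover -> verifier: response s
print(verifier_check(t, c, s))    # True: the verifier is convinced, but learns nothing about x
```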
·en.wikipedia.org·
Fermat's Library | Blue Zones: Lessons From the World's Longest Lived annotated/explained version.
Fermat's Library is a platform for illuminating academic papers.
This community of shepherds walks 5 mountainous miles a day or more. This natural movement provides all the positive cardiovascular benefits you might expect and also has a positive effect on muscle and bone metabolism without the pounding of running marathons.
·fermatslibrary.com·
Denoising Diffusion Probabilistic Models
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at https://github.com/hojonathanho/diffusion
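A rough sketch of the training objective the abstract describes, assuming the standard epsilon-prediction parameterization: noise a clean sample to a random timestep with the closed-form forward process, then regress the added noise. The `model` below is only a placeholder for the paper's U-Net.
```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)        # cumulative product \bar{alpha}_t

def model(x_t, t):
    # Placeholder network that should predict the noise added at step t.
    return np.zeros_like(x_t)

def training_loss(x0):
    t = np.random.randint(T)                # sample a timestep uniformly
    eps = np.random.randn(*x0.shape)        # Gaussian noise
    # Closed-form forward process: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps
    x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return np.mean((eps - model(x_t, t)) ** 2)   # simple MSE form of the bound

x0 = np.random.rand(32, 32, 3)              # a dummy "image"
print(training_loss(x0))
```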
·arxiv.org·
The big boost: How incumbents successfully scale their new businesses
Corporations can help their new ventures scale up if they avoid these six actions that can undermine success.
Successful start-ups focus from day one on the projected lifetime value for each targeted customer segment, and they review critical leading and lagging indicators every day. Noteworthy among high performers is their fixation on a single “star metric” that is most indicative of success for their business
·mckinsey.com·
16 Startup Metrics | Andreessen Horowitz
We have the privilege of meeting with thousands of entrepreneurs every year, and in the course of those discussions are presented with all kinds of numbers, measures, and metrics that illustrate the promise and health of a particular company. Sometimes, however, the metrics may not be the best gauge of what’s actually happening in the business, or people may use different definitions of the same metric in a way that makes it hard to understand the health of the business. So, while some of this may be obvious to many of you who live and breathe these metrics all day long, we compiled a list of the most common or confusing metrics. Where appropriate, we tried to add some notes on why investors focus on those metrics. Ultimately, though, good metrics aren’t about raising money from VCs -- they’re about running the business in a way where you know how and why certain things are working (or not), and can address or adjust accordingly.
A common mistake is to use bookings and revenue interchangeably, but they aren’t the same thing. Bookings is the value of a contract between the company and the customer. It reflects a contractual obligation on the part of the customer to pay the company. Revenue is recognized when the service is actually provided or ratably over the life of the subscription agreement. How and when revenue is recognized is governed by GAAP.
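A hypothetical example of the distinction, using made-up numbers: a 12-month, $120,000 contract is counted as a booking in full at signing, while revenue is recognized ratably as the service is delivered.
```python
# Hypothetical 12-month contract signed in January.
booking = 120_000                # counted as a booking at signing
monthly_revenue = booking / 12   # revenue recognized ratably over the subscription

# Revenue recognized by the end of March (three months of service delivered):
recognized_so_far = monthly_revenue * 3
print(booking, recognized_so_far)   # 120000 vs 30000.0
```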
Investors more highly value companies where the majority of total revenue comes from product revenue (vs. from services). Why? Services revenue is non-recurring, has much lower margins, and is less scalable. Product revenue is what you generate from the sale of the software or product itself. ARR (annual recurring revenue) is a measure of revenue components that are recurring in nature. It should exclude one-time (non-recurring) fees and professional service fees. ARR per customer: Is this flat or growing? If you are upselling or cross-selling your customers, then it should be growing, which is a positive indicator for a healthy business. MRR (monthly recurring revenue): Often, people will multiply one month’s all-in bookings by 12 to get to ARR. Common mistakes with this method include: (1) counting non-recurring fees such as hardware, setup, installation, professional services/consulting agreements; (2) counting bookings (see #1).
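A small sketch of the ARR mistake called out above, with invented numbers: annualizing all-in monthly bookings inflates ARR because it sweeps in one-time fees.
```python
# Hypothetical month of billings.
month = {
    "subscriptions": 80_000,    # recurring
    "setup_fees": 15_000,       # one-time, non-recurring
    "consulting": 25_000,       # professional services, non-recurring
}

naive_arr = sum(month.values()) * 12        # the common mistake: annualize everything
correct_arr = month["subscriptions"] * 12   # only recurring components belong in ARR

print(naive_arr, correct_arr)               # 1440000 vs 960000
```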
While top-line bookings growth is super important, investors want to understand how profitable that revenue stream is. Gross profit provides that measure. What’s included in gross profit may vary by company, but in general all costs associated with the manufacturing, delivery, and support of a product/service should be included.
TCV (total contract value) is the total value of the contract, and can be shorter or longer in duration. Make sure TCV also includes the value from one-time charges, professional service fees, and recurring charges. ACV (annual contract value), on the other hand, measures the value of the contract over a 12-month period. Questions to ask about ACV: What is the size? Are you getting a few hundred dollars per month from your customers, or are you able to close large deals? Of course, this depends on the market you are targeting (SMB vs. mid-market vs. enterprise). Is it growing (and especially not shrinking)? If it’s growing, it means customers are paying you more on average for your product over time. That implies either your product is fundamentally doing more (adding features and capabilities) to warrant that increase, or is delivering so much value to customers (improved functionality over alternatives) that they are willing to pay more for it.
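A toy calculation of the two metrics for a hypothetical three-year deal, following the definitions above (with ACV taken as the recurring value over a 12-month period):
```python
# Hypothetical 3-year contract: recurring fees plus one-time charges.
years = 3
annual_recurring = 100_000
one_time_setup = 20_000
professional_services = 30_000

tcv = annual_recurring * years + one_time_setup + professional_services   # 350000
acv = annual_recurring                                                    # value over 12 months

print(tcv, acv)
```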
Lifetime value is the present value of the future net profit from the customer over the duration of the relationship. It helps determine the long-term value of the customer and how much net value you generate per customer after accounting for customer acquisition costs (CAC). A common mistake is to estimate the LTV as a present value of revenue or even gross margin of the customer instead of calculating it as net profit of the customer over the life of the relationship.
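A minimal sketch of LTV computed the way the excerpt recommends: discounting future net profit per customer (not revenue, and not just gross margin) over the expected relationship. All figures are hypothetical.
```python
# Hypothetical per-customer unit economics.
annual_revenue = 1_200
gross_margin = 0.70
annual_service_cost = 200     # ongoing cost to serve, beyond COGS
retention_years = 4
discount_rate = 0.10

# Present value of net profit over the life of the relationship.
ltv = sum(
    (annual_revenue * gross_margin - annual_service_cost) / (1 + discount_rate) ** t
    for t in range(1, retention_years + 1)
)
print(round(ltv, 2))          # compare against CAC to judge unit economics
```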
GMV (gross merchandise volume) is the total sales dollar volume of merchandise transacting through the marketplace in a specific period. It’s the real top line, what the consumer side of the marketplace is spending. It is a useful measure of the size of the marketplace and can be useful as a “current run rate” measure based on annualizing the most recent month or quarter. Revenue is the portion of GMV that the marketplace “takes”. Revenue consists of the various fees that the marketplace gets for providing its services; most typically these are transaction fees based on GMV successfully transacted on the marketplace, but can also include ad revenue, sponsorships, etc. These fees are usually a fraction of GMV.
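A toy illustration with invented numbers: GMV is what buyers spend, and marketplace revenue is the take on that volume plus any other fees.
```python
# Hypothetical marketplace month.
gmv = 2_000_000                    # total merchandise volume transacted
take_rate = 0.12                   # transaction fee as a fraction of GMV
other_fees = 50_000                # ads, sponsorships, etc.

revenue = gmv * take_rate + other_fees
annual_run_rate_gmv = gmv * 12     # "current run rate" based on the latest month

print(revenue, annual_run_rate_gmv)
```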
A good proxy to measure the growth — and ultimately the health — of a SaaS company is to look at billings, which is calculated by taking the revenue in one quarter and adding the change in deferred revenue from the prior quarter to the current quarter. If a SaaS company is growing its bookings (whether through new business or upsells/renewals to existing customers), billings will increase.
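The billings calculation above reduces to a one-line formula; a sketch with hypothetical quarterly figures:
```python
# Billings = revenue this quarter + change in deferred revenue vs. the prior quarter.
revenue_q = 10_000_000
deferred_revenue_prior_q = 24_000_000
deferred_revenue_current_q = 27_500_000

billings_q = revenue_q + (deferred_revenue_current_q - deferred_revenue_prior_q)
print(billings_q)   # 13500000 -- growth in bookings shows up here before it shows up in revenue
```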
Customer acquisition cost or CAC should be the full cost of acquiring users, stated on a per user basis. Unfortunately, CAC metrics come in all shapes and sizes. One common problem with CAC metrics is failing to include all the costs incurred in user acquisition such as referral fees, credits, or discounts. Another common problem is to calculate CAC as a “blended” cost (including users acquired organically) rather than isolating users acquired through “paid” marketing. While blended CAC [total acquisition cost / total new customers acquired across all channels] isn’t wrong, it doesn’t inform how well your paid campaigns are working and whether they’re profitable.
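A short sketch contrasting blended and paid CAC, with made-up figures, following the bracketed definition above:
```python
# Hypothetical quarter.
acquisition_spend = 300_000   # all acquisition costs, incl. referral fees, credits, discounts
customers_paid = 1_000        # acquired through paid campaigns
customers_organic = 1_500     # acquired organically

blended_cac = acquisition_spend / (customers_paid + customers_organic)   # 120.0
paid_cac = acquisition_spend / customers_paid                            # 300.0

print(blended_cac, paid_cac)  # paid CAC is what tells you whether campaigns are profitable
```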
Burn rate is the rate at which cash is decreasing. Especially in early stage startups, it’s important to know and monitor burn rate as companies fail when they are running out of cash and don’t have enough time left to raise funds or reduce expenses
·a16z.com·
ACV (Annual Contract Value) vs ARR (Annual Recurring Revenue): How to use them?
ACV (Annual Contract Value) and ARR (Annual Recurring Revenue) are two of the most crucial revenue metrics. Learn how to apply them both in your subscription business.
ACV or Annual Contract Value is a revenue metric that describes the amount of revenue you receive from a given customer each year. ARR or Annual Recurring Revenue is also a revenue metric that describes the amount of revenue you can expect to receive from your existing clients in a given year.
·chargebee.com·
ProDiff: Progressive Fast Diffusion Model For High-Quality Text-to-Speech
Denoising diffusion probabilistic models (DDPMs) have recently achieved leading performances in many generative tasks. However, the inherited iterative sampling process costs hinder their applications to text-to-speech deployment. Through the preliminary study on diffusion model parameterization, we find that previous gradient-based TTS models require hundreds or thousands of iterations to guarantee high sample quality, which poses a challenge for accelerating sampling. In this work, we propose ProDiff, a progressive fast diffusion model for high-quality text-to-speech. Unlike previous work estimating the gradient for data density, ProDiff parameterizes the denoising model by directly predicting clean data to avoid distinct quality degradation in accelerating sampling. To tackle the model convergence challenge with decreased diffusion iterations, ProDiff reduces the data variance in the target side via knowledge distillation. Specifically, the denoising model uses the generated mel-spectrogram from an N-step DDIM teacher as the training target and distills the behavior into a new model with N/2 steps. As such, it allows the TTS model to make sharp predictions and further reduces the sampling time by orders of magnitude. Our evaluation demonstrates that ProDiff needs only 2 iterations to synthesize high-fidelity mel-spectrograms, while it maintains sample quality and diversity competitive with state-of-the-art models using hundreds of steps. ProDiff enables a sampling speed of 24x faster than real-time on a single NVIDIA 2080Ti GPU, making diffusion models practically applicable to text-to-speech synthesis deployment for the first time. Our extensive ablation studies demonstrate that each design in ProDiff is effective, and we further show that ProDiff can be easily extended to the multi-speaker setting. Audio samples are available at https://ProDiff.github.io/.
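A very abstract sketch of the progressive distillation loop the abstract describes: an N-step DDIM teacher produces mel-spectrogram targets, a student is trained to match them in N/2 steps, and the student then becomes the next teacher. The `ddim_sample` and `train_student_to_match` functions are placeholders, not the paper's actual code.
```python
import numpy as np

def ddim_sample(model, text_batch, num_steps):
    # Placeholder for DDIM sampling of a mel-spectrogram conditioned on text.
    return np.random.randn(len(text_batch), 80, 100)

def train_student_to_match(student, targets, text_batch, num_steps):
    # Placeholder for fitting a student that predicts clean data directly.
    return student

def progressive_distillation(teacher, student, dataset, start_steps=8, final_steps=2):
    steps = start_steps
    while steps > final_steps:
        for text_batch in dataset:
            targets = ddim_sample(teacher, text_batch, num_steps=steps)
            student = train_student_to_match(student, targets, text_batch, num_steps=steps // 2)
        teacher, steps = student, steps // 2   # the student becomes the next round's teacher
    return student

final_student = progressive_distillation(teacher=None, student=None, dataset=[["hello world"]])
```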
·arxiv.org·
A Generalist Neural Algorithmic Learner
The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge in methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with identical control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner -- a single graph neural network processor capable of learning to execute a wide range of algorithms, such as sorting, searching, dynamic programming, path-finding and geometry. We leverage the CLRS benchmark to empirically show that, much like recent successes in the domain of perception, generalist algorithmic learners can be built by "incorporating" knowledge. That is, it is possible to effectively learn algorithms in a multi-task manner, so long as we can learn to execute them well in a single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime and processor architecture over CLRS, improving average single-task performance by over 20% from prior art. We then conduct a thorough ablation of multi-task learners leveraging these improvements. Our results demonstrate a generalist learner that effectively incorporates knowledge captured by specialist models.
·arxiv.org·
Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments
We present a hierarchical learning framework, named PRELUDE, which decomposes the problem of perceptive locomotion into high-level decision-making to predict navigation commands and low-level gait generation to realize the target commands. In this framework, we train the high-level navigation controller with imitation learning on human demonstrations collected on a steerable cart and the low-level gait controller with reinforcement learning (RL). Therefore, our method can acquire complex navigation behaviors from human supervision and discover versatile gaits from trial and error
·ut-austin-rpl.github.io·
NLMap-SayCan
Project page for Open-vocabulary Queryable Scene Representations for Real World Planning
In this paper, we develop NLMap, an open-vocabulary and queryable scene representation to address this problem. NLMap serves as a framework to gather and integrate contextual information into LLM planners, allowing them to see and query available objects in the scene before generating a context-conditioned plan. NLMap first establishes a natural language queryable scene representation with visual language models (VLMs). An LLM-based object proposal module parses instructions and proposes involved objects to query the scene representation for object availability and location. An LLM planner then plans with such information about the scene. NLMap allows robots to operate without a fixed list of objects or executable options, enabling real robot operation unachievable by previous methods.
·nlmap-saycan.github.io·
Pure Transformers are Powerful Graph Learners
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice. Given a graph, we simply treat all nodes and edges as independent tokens, augment them with token embeddings, and feed them to a Transformer. With an appropriate choice of token embeddings, we prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers, which is already more expressive than all message-passing Graph Neural Networks (GNN). When trained on a large-scale graph dataset (PCQM4Mv2), our method coined Tokenized Graph Transformer (TokenGT) achieves significantly better results compared to GNN baselines and competitive results compared to Transformer variants with sophisticated graph-specific inductive bias. Our implementation is available at https://github.com/jw9730/tokengt.
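A small sketch of the tokenization step described above, under the assumption of orthonormal random node identifiers (the paper also supports Laplacian-eigenvector identifiers); the Transformer itself is omitted.
```python
import numpy as np

def graph_to_tokens(node_feats, edges, edge_feats, rng=np.random.default_rng(0)):
    # Every node and every edge becomes one token, augmented with node-identifier
    # and type embeddings, and the sequence can be fed to a plain Transformer.
    n, d = node_feats.shape
    P, _ = np.linalg.qr(rng.standard_normal((n, n)))       # orthonormal node identifiers
    type_node, type_edge = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    tokens = []
    for v in range(n):                                      # node token: [x_v, P_v, P_v, type]
        tokens.append(np.concatenate([node_feats[v], P[v], P[v], type_node]))
    for (u, v), x_uv in zip(edges, edge_feats):             # edge token: [x_uv, P_u, P_v, type]
        tokens.append(np.concatenate([x_uv, P[u], P[v], type_edge]))
    return np.stack(tokens)                                 # (num_nodes + num_edges, d + 2n + 2)

# Tiny example: a triangle graph with 4-dim node and edge features.
nodes = np.random.rand(3, 4)
edges = [(0, 1), (1, 2), (2, 0)]
edge_feats = np.random.rand(3, 4)
print(graph_to_tokens(nodes, edges, edge_feats).shape)      # (6, 12)
```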
·arxiv.org·
Which Current Unicorn Companies Reached $1 Billion the Fastest? | ZenBusiness Inc.
Here are the 959 companies classified as unicorns at the start of 2022 and their growth according to speed. Which one was the first to $1B?
Blitzscaling became the name of the game. This method of “high-impact entrepreneurship” involves growing as big as possible as fast as possible to squash out the competition. With low overheads and a get-big-fast strategy, digital start-ups began hitting billion-dollar valuations long before it made sense to cash in. And, in 2013, investor Aileen Lee christened these ever more common billion-dollar companies: unicorns. Like any designer animal, the unicorn comes with troubling genetic tendencies. Aggressive growth does indeed stunt the competition – warping the market and the culture as ideas and opportunities crumble in the unicorn’s wake. The “reality distortion field” through which the unicorn convinces investors of their vision is prone to fade, revealing insurmountable obstacles. And for the unicorn itself, growing so big so fast is a strain that causes many to collapse.
·zenbusiness.com·
Dunbar's number - Wikipedia
Dunbar's number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships—relationships in which an individual knows who each person is and how each person relates to every other person.[1][2] This number was first proposed in the 1990s by British anthropologist Robin Dunbar, who found a correlation between primate brain size and average social group size.[3] By using the average human brain size and extrapolating from the results of primates, he proposed that humans can comfortably maintain 150 stable relationships
·en.wikipedia.org·
Nootropic - Wikipedia
Nicotine – a meta-analysis of 41 clinical studies concluded that nicotine administration or smoking improves alerting and orienting attention and episodic and working memory and slightly improves fine motor performance
A 2016 review reported that theanine may increase alpha waves in the brain. Alpha waves may contribute to a relaxed yet alert mental state.[50] Another study has shown that an oral administration of theanine at doses of 50–200 mg promoted "relaxation without causing drowsiness" within 40 mins after consumption.[51] A 2014 systematic review and meta-analysis found that concurrent caffeine and L-theanine use had synergistic psychoactive effects that promoted alertness, attention, and task switching. These effects were most pronounced during the first hour post-dose
Racetams, such as piracetam, oxiracetam, phenylpiracetam, and aniracetam, are often marketed as cognitive enhancers and sold over the counter.[52] A 2019 study found that piracetam supplements sold in the United States were inaccurately labeled.[52] Racetams are often referred to as nootropics, but this property is not well established.[53] The racetams have poorly understood mechanisms, although piracetam and aniracetam are known to act as positive allosteric modulators of AMPA receptors and appear to modulate cholinergic systems
·en.wikipedia.org·
Piracetam - Wikipedia
Piracetam is a drug marketed as a treatment for myoclonus.[3] It is also used as a cognitive enhancer to improve memory, attention, and learning.[4][5][6][7][8] Evidence to support its use is unclear, with some studies showing modest benefits in specific populations and others showing minimal or no benefit.[9][10] Piracetam is sold as a medication in many European countries. Sale of piracetam is not illegal in the United States, although it is neither regulated nor approved by the FDA, so it is legally sold for research use only.[4]
·en.wikipedia.org·
Test-Time Prompt Tuning for Zero-shot Generalization in Vision-Language Models
Pre-trained vision-language models (e.g., CLIP) have shown impressive zero-shot generalization in various downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using training data from downstream tasks, but this can be expensive and hard to generalize to new tasks and distributions. To this end, we propose test-time prompt tuning (TPT) as the first prompt tuning method that can learn adaptive prompts on the fly with a single test sample. TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample
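A rough sketch of one TPT update as described above, written against a stand-in `clip_logits` function rather than the real CLIP interface: augment a single test image, keep only the most confident (lowest-entropy) views, and take one gradient step on the prompt to minimize the entropy of their averaged prediction.
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def entropy(p):
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def tpt_step(logits_fn, views, prompt, keep_fraction=0.1, lr=5e-3):
    # One test-time prompt tuning step: only the prompt is updated, from a single test sample.
    prompt = prompt.clone().requires_grad_(True)
    probs = F.softmax(logits_fn(views, prompt), dim=-1)       # (num_views, num_classes)
    per_view_entropy = entropy(probs)
    k = max(1, int(keep_fraction * len(views)))
    keep = per_view_entropy.topk(k, largest=False).indices    # confidence selection: low-entropy views
    loss = entropy(probs[keep].mean(dim=0))                   # entropy of the averaged prediction
    loss.backward()
    with torch.no_grad():
        prompt -= lr * prompt.grad                            # gradient step on the prompt only
    return prompt.detach()

def clip_logits(views, prompt):
    # Hypothetical stand-in for a CLIP-style scorer whose logits depend on the prompt.
    return views.flatten(1)[:, : prompt.numel()] * prompt

views = torch.randn(64, 3, 8, 8)   # augmented views of one test image (e.g. random crops)
prompt = torch.randn(4)            # learnable prompt context (4 "classes" in this toy setup)
prompt = tpt_step(clip_logits, views, prompt)
print(prompt)
```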
·azshue.github.io·
What does "stationary" mean in the context of reinforcement learning?
I think I've seen the expressions "stationary data", "stationary dynamics" and "stationary policy", among others, in the context of reinforcement learning. What does it mean? I think stationary pol...
A stationary policy is a policy that does not change. Although strictly that is a time-dependent issue, that is not what the distinction refers to in reinforcement learning. It generally means that the policy is not being updated by a learning algorithm
·ai.stackexchange.com·
What does "stationary" mean in the context of reinforcement learning?
re:Work - The five keys to a successful Google team
re:Work - The five keys to a successful Google team
Pod. Work group. Committee. Autonomous collective. Whatever you call it, you’re part of one at Google and probably wherever you work: a team. So if we know what makes managers great, why don’t we know what makes a team great?
We learned that there are five key dynamics that set successful teams apart from other teams at Google:
Psychological safety: Can we take risks on this team without feeling insecure or embarrassed?
Dependability: Can we count on each other to do high quality work on time?
Structure & clarity: Are goals, roles, and execution plans on our team clear?
Meaning of work: Are we working on something that is personally important for each of us?
Impact of work: Do we fundamentally believe that the work we’re doing matters?
·rework.withgoogle.com·