Readme
Writing and Speaking with Style
Benjamin C. Pierce and Rajeev Alur, University of Pennsylvania
This is the course webpage for CIS8100, a semester-long course on technical writing and speaking for PhD students in engineering and the sciences at the University of Pennsylvania, with most examples dr...
·docs.google.com·
New Stack Maps for Wasmtime and Cranelift
As part of implementing the WebAssembly garbage collection proposal in Wasmtime, which is an ongoing process, we’ve overhauled the stack map infrastructure in Cranelift. This post will explain what stack maps are, why we needed to change th...
·bytecodealliance.org·
Union Types in Rust with Type-Level Lists | Shun Kashiwa's Blog
In this article, I will discuss a technique for representing union types in Rust. With type-level lists, we can express a set of types, and through trait resolution, determine whether a particular type is part of a set or whether one set is a subset of another. The core of this technique is a recursive operation on type-level lists. To avoid conflicting trait implementations, we add a marker type that expresses the depth of the recursion.
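A minimal sketch of the core idea, written from the description above rather than from the article's actual code: a type-level list, plus an index type (`Here`/`There`) that records recursion depth so the two membership impls never overlap.

```rust
use std::marker::PhantomData;

// Type-level list: Nil is the empty list, Cons<H, T> prepends H to T.
struct Nil;
struct Cons<H, T>(PhantomData<(H, T)>);

// Index types recording the depth of the recursion; they are what keeps
// the two `Member` impls below from overlapping.
struct Here;
struct There<I>(PhantomData<I>);

// Witness that `T` occurs somewhere in the implementing list.
trait Member<T, Index> {}

// Base case: T is at the head of the list.
impl<T, Tail> Member<T, Here> for Cons<T, Tail> {}

// Recursive case: T is in the tail, one `There` deeper.
impl<T, H, Tail, I> Member<T, There<I>> for Cons<H, Tail> where Tail: Member<T, I> {}

// Only type-checks when T is a member of the list L.
fn require_member<L, T, I>()
where
    L: Member<T, I>,
{
}

fn main() {
    type Ints = Cons<u8, Cons<u32, Nil>>;
    require_member::<Ints, u32, _>(); // compiles: u32 is in the set
    // require_member::<Ints, f64, _>(); // rejected: f64 is not in the set
}
```

The compiler infers the index argument itself, so callers never spell out where in the list the type sits.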
·blog.shun-k.dev·
Programming ZKPs: From Zero to Hero
Learn to write and modify Zero Knowledge Proofs from scratch. You'll build a digital signature scheme using hash-based commitments, gaining practical ZKP programming skills and intuition along the way. By the end, you'll have all the tools you need to implement things like group signatures.
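The article itself works inside a ZK circuit language; purely as a warm-up for the "hash-based commitment" ingredient, here is what commit/open looks like in plain Rust. This is a sketch assuming the `sha2` and `rand` crates, not code from the article.

```rust
use rand::RngCore;
use sha2::{Digest, Sha256};

/// Commit to `message`: pick a random blinding value r and hash r || message.
/// Returns (commitment, r); keep r secret until you open the commitment.
fn commit(message: &[u8]) -> ([u8; 32], [u8; 32]) {
    let mut r = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut r);
    let mut hasher = Sha256::new();
    hasher.update(r);
    hasher.update(message);
    (hasher.finalize().into(), r)
}

/// Check a revealed (message, r) pair against the earlier commitment.
/// (A real implementation would compare digests in constant time.)
fn open(commitment: &[u8; 32], message: &[u8], r: &[u8; 32]) -> bool {
    let mut hasher = Sha256::new();
    hasher.update(r);
    hasher.update(message);
    let digest: [u8; 32] = hasher.finalize().into();
    &digest == commitment
}
```

A ZKP then lets you prove statements about the committed value without revealing it, which is where the circuit programming in the article comes in.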
·zkintro.com·
Controlling Queue Delay - ACM Queue
Nearly three decades after it was first diagnosed, the "persistently full buffer problem," recently exposed as part of "bufferbloat," is still with us, made increasingly critical by two trends. First, cheap memory and a "more is better" mentality have led to the inflation and proliferation of buffers. Second, dynamically varying path characteristics are much more common today and are the norm at the consumer Internet edge. Reasonably sized buffers become extremely oversized when link rates and path delays fall below nominal values.
·queue.acm.org·
The plan-execute pattern
A ubiquitous pattern you won’t find in your textbook.
·mmapped.blog·
Basic memory allocators explained in 10 minutes
There's a lot of crap out there about memory allocators. Try searching for explanations of slab allocators and there'll be a lot of prattling on about OS resources, which is something slab allocators can be used for, but it's not what they are. So I'm going to quickly cover the basics of several simple memory allocators!

Bump allocator

The bump allocator is the fastest allocator there is. All it does is give you a pointer to where it currently is, then advance the pointer by the requested size. Super simple. It can't free memory, however. Bump allocators are useful because you can give them a fixed-size chunk to allocate from and then reset them at some point where you know you're done with all the resources. For example, in gamedev you can allocate stuff into one that will only last until the end of the frame, then reset it. Very fast, very efficient, not great for general purpose.

Buddy allocator

A buddy allocator is assigned some power-of-two memory region and then creates a binary tree to allocate the smallest power-of-two size that will fit, often with a minimum size of 4 KB. So if you have a completely clear buddy allocator of 256 KB and you request 16 KB, it'll split into two chunks of 128 KB, then split one of those into two of 64 KB, then 32 KB, then 16 KB, and allocate one. When freeing, you check whether the buddy of the freed chunk is free; if it is, you merge the chunks back up into the larger size. Buddy allocators are relatively simple and have the advantage that you can store a bitfield, outside the memory the allocator manages, of which chunks at all levels are free or allocated, so that the actual memory inside the allocator remains contiguous.

Slab allocator

Don't believe anything you read on Google about OS resources and other bullshit: a slab allocator is literally just a fucking array with a singly linked list pointing at the next free item. That is, slab allocators are meant to give you single elements of one size, which often means they're used for one specific type of item. They're very fast and efficient and, unlike a bump allocator, can free items, but they do need to maintain a linked list and are limited to a single allocation size. You can fancy these up by creating an additional linked list that points at multiple buffers rather than using a single buffer, so you can dynamically grow and shrink the amount of memory the allocator is using.
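Of these, the bump allocator is simple enough to sketch in a few lines. This is an illustrative Rust version of the design described above, not code from the post:

```rust
/// Sketch of a bump allocator: hand out the current position, advance by
/// the aligned size, and reclaim nothing until `reset`.
struct Bump {
    buf: Vec<u8>, // the fixed-size chunk we allocate from
    next: usize,  // offset of the first free byte
}

impl Bump {
    fn new(capacity: usize) -> Self {
        Bump { buf: vec![0; capacity], next: 0 }
    }

    fn alloc(&mut self, size: usize, align: usize) -> Option<*mut u8> {
        debug_assert!(align.is_power_of_two());
        let start = (self.next + align - 1) & !(align - 1); // round up to alignment
        if start.checked_add(size)? > self.buf.len() {
            return None; // out of space: no individual frees, only reset()
        }
        self.next = start + size;
        Some(unsafe { self.buf.as_mut_ptr().add(start) })
    }

    /// Throw everything away at once, e.g. at the end of a frame.
    fn reset(&mut self) {
        self.next = 0;
    }
}
```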
Freelist allocator

By far the most complex, a freelist takes a region of memory and divides it up into chunks that fit the requested allocations, keeping a linked list of free chunks (hence the name) so that it only has to search that list to find a fitting chunk when allocating. When previously allocated chunks are freed, they're merged with any free chunks that neighbour them. The tricky bit is that since you're building a memory allocator, you generally don't have a linked list implementation just sitting around (because that would itself require a memory allocator, wahey!), so you have to implement the list inside the memory region that holds the allocations.

This can be done many ways, but the easiest is for each chunk to carry a small header before the memory handed to the calling code, storing whether the chunk is free or allocated, its size (including the header) so we can reach the next chunk, and:

* if an allocated block, the size of the previous block, so it's easier to merge with a preceding block of free memory when being freed
* if a free block, where the previous/next free blocks in the linked list can be found

(One possible layout for this header is sketched below, after the reading list.) You can get fancy about this. Some freelists use a singly linked list that's kept sorted (so previous is always behind and next is always ahead in memory), and some use a single bit in allocated chunks to store whether the previous chunk is free, then store a footer at the end of free chunks which also contains the size, to save on wasted memory. But that's all garnish; the basics aren't that bad.

Conclusion

These are the basics of several memory allocator types. General-purpose memory allocators tend to be a mixture or composition of multiple allocator types. Feel free to ask questions and share if you found this helpful, cause someone else may too.

Additional reading:

* Why std::allocator sucks and how to create a good allocator via composition: CppCon 2015: Andrei Alexandrescu "std::allocator..." [https://www.youtube.com/watch?v=LIb3L4vKZ7U]
* How to design a freelist: GWU OS: memory allocation - malloc and free [https://www.youtube.com/watch?v=5zvu7vyypt0] by Gabriel Parmer [https://www.youtube.com/user/nothing6001]
* Slab and buddy allocators: GWU OS: Memory Allocation - Slab and Buddy Allocators [https://www.youtube.com/watch?v=DRAHRJEAEso] by Gabriel Parmer [https://www.youtube.com/user/nothing6001]

Edit: oh and read the comments, people are saying helpful stuff in there!
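And the promised header sketch: one illustrative way to spell out the freelist chunk header described above. The field choices are hypothetical; real allocators pack these much more tightly.

```rust
/// One possible layout for the freelist chunk header described above.
#[repr(C)]
struct ChunkHeader {
    size: usize,      // total chunk size including this header, so the
                      // next chunk sits at (this chunk's address + size)
    prev_size: usize, // size of the chunk just before us in memory,
                      // so a freed chunk can merge backwards
    free: bool,       // whether this chunk is currently free
}

/// For free chunks only: the intrusive list links live in the payload
/// area right after the header (that space is unused while the chunk is free).
#[repr(C)]
struct FreeLinks {
    prev_free: *mut ChunkHeader,
    next_free: *mut ChunkHeader,
}
```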
·cohost.org·
newca12/awesome-rust-formalized-reasoning: An exhaustive list of all Rust resources regarding automated or semi-automated formalization efforts in any area, constructive mathematics, formal algorithms, and program verification.
An exhaustive list of all Rust resources regarding automated or semi-automated formalization efforts in any area, constructive mathematics, formal algorithms, and program verification.
·github.com·
Post 1: Datalog, Chain-Forward Computation, and Relational Algebra
Our setting is logic programming, a field that designs programming languages whose semantics have a close relationship to formal logic. The reason we might want this is that such a language can suit our application domain more precisely than an implementation in a traditional programming language. Using a logic programming language thus allows us to write more obviously-correct code, and perhaps even code that can be extracted cleanly from a certified implementation. Besides, if we implemented the logic ourselves, we'd just be redoing what our compiler (interpreter, …) would do for us anyway, so there's no sense in doing it manually. Unfortunately, when we see a powerful tool, we are tempted to use it for everything: if our application is not ultimately suited to the operationalization strategy of the logic programming engine we're using, we simply obfuscate the issue in a veneer of formalism and end up with leaky abstractions. This is, I speculate, why logic programming languages have never caught on broadly for general-purpose programming. In this blog, I will detail the various trade-offs and implementation paradigms for modern logic programming engines, starting from Datalog and with a focus on program analysis.
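To make "chain-forward computation" concrete: the classic Datalog program is reachability, `path(x,y) :- edge(x,y).` and `path(x,z) :- path(x,y), edge(y,z).` Here is a hedged Rust sketch of naive forward chaining for it; real engines use semi-naive evaluation and indexed relations rather than this brute-force loop.

```rust
use std::collections::HashSet;

/// Naive forward chaining for:
///   path(x, y) :- edge(x, y).
///   path(x, z) :- path(x, y), edge(y, z).
/// Apply the rules until no new facts appear (a fixpoint).
fn reachability(edges: &[(u32, u32)]) -> HashSet<(u32, u32)> {
    let mut path: HashSet<(u32, u32)> = edges.iter().copied().collect();
    loop {
        let mut fresh = Vec::new();
        for &(x, y) in &path {
            for &(y2, z) in edges {
                if y == y2 && !path.contains(&(x, z)) {
                    fresh.push((x, z));
                }
            }
        }
        if fresh.is_empty() {
            return path; // fixpoint: the rules derive nothing new
        }
        path.extend(fresh);
    }
}

fn main() {
    let paths = reachability(&[(1, 2), (2, 3), (3, 4)]);
    assert!(paths.contains(&(1, 4))); // derived transitively
}
```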
·kmicinski.com·
Quickly identifying a sequence of digits in a string of characters
Suppose that you want to quickly determine whether a sequence of eight characters is made of digits (e.g., ‘9434324134’). How fast can you go? In software, characters are mapped to integer values called code points. The ASCII and UTF-8 code points for the digits 0, 1, …, 9 are the consecutive integers 0x30, 0x31, …, 0x39 …
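Because the digit code points are consecutive, all eight bytes can be checked at once with SWAR tricks. A Rust sketch in the spirit of the post (treat the exact formulation as illustrative):

```rust
/// Check that all eight bytes are ASCII digits ('0'..='9', code points
/// 0x30..=0x39) with a handful of arithmetic operations, by treating the
/// bytes as a single u64.
fn is_eight_digits(chars: &[u8; 8]) -> bool {
    let val = u64::from_le_bytes(*chars);
    // Each byte must have high nibble 3, and must still have high nibble 3
    // after adding 6 (0x3A..=0x3F would overflow into 0x40..=0x45), which
    // pins every byte into 0x30..=0x39.
    let hi = 0xF0F0_F0F0_F0F0_F0F0u64;
    ((val & hi) | ((val.wrapping_add(0x0606_0606_0606_0606) & hi) >> 4))
        == 0x3333_3333_3333_3333
}

fn main() {
    assert!(is_eight_digits(b"94343241"));
    assert!(!is_eight_digits(b"9434324a"));
}
```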
·lemire.me·
How fast is bit packing?
Integer values are typically stored using 32 bits. Yet if you are given an array of integers between 0 and 131 072, you could store these numbers using as little as 17 bits each—a net saving of almost 50%. Programmers nearly never store integers in this manner despite the obvious compression benefits. Indeed, bit packing and …
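A sketch of the idea in Rust, not the tuned kernels the post benchmarks: pack n-bit values back to back into 64-bit words and unpack them again.

```rust
/// Pack each value into `bits` bits, back to back in 64-bit words.
fn pack(values: &[u32], bits: u32) -> Vec<u64> {
    assert!(bits >= 1 && bits <= 32);
    let total_bits = values.len() as u64 * bits as u64;
    let mut out = vec![0u64; ((total_bits + 63) / 64) as usize];
    for (i, &v) in values.iter().enumerate() {
        debug_assert!(bits == 32 || (v as u64) < (1u64 << bits)); // value fits
        let pos = i as u64 * bits as u64;
        let (word, offset) = ((pos / 64) as usize, (pos % 64) as u32);
        out[word] |= (v as u64) << offset;
        if offset + bits > 64 {
            out[word + 1] |= (v as u64) >> (64 - offset); // straddles two words
        }
    }
    out
}

/// Recover `count` values of `bits` bits each from the packed words.
fn unpack(packed: &[u64], count: usize, bits: u32) -> Vec<u32> {
    let mask = (1u64 << bits) - 1;
    (0..count)
        .map(|i| {
            let pos = i as u64 * bits as u64;
            let (word, offset) = ((pos / 64) as usize, (pos % 64) as u32);
            let mut v = packed[word] >> offset;
            if offset + bits > 64 {
                v |= packed[word + 1] << (64 - offset);
            }
            (v & mask) as u32
        })
        .collect()
}

fn main() {
    let vals = [131_071u32, 0, 42, 99_999]; // all fit in 17 bits
    let packed = pack(&vals, 17);
    assert_eq!(unpack(&packed, vals.len(), 17), vals);
}
```

Four 17-bit values occupy 68 bits here instead of 128, which is where the near-50% saving in the excerpt comes from.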
·lemire.me·
5th HLF – Lecture: Leslie Lamport
Leslie Lamport: "How to Write a 21st Century Proof". Mathematicians have made a lot of progress in the last 350 years, but not in writing proofs. The proofs th...
·youtube.com·