From 995d0580d1f923615623092276fe05ca4cee9308 Mon Sep 17 00:00:00 2001
From: Daniel Micay
Date: Sun, 18 Aug 2019 01:39:22 -0400
Subject: [PATCH] remove extra spaces inserted by vim joinspaces

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index b408c0d..dfc0151 100644
--- a/README.md
+++ b/README.md
@@ -46,7 +46,7 @@ differences in the design are that it is solely focused on hardening rather
 than finding bugs, uses finer-grained size classes along with slab sizes going
 beyond 4k to reduce internal fragmentation, doesn't rely on the kernel having
 fine-grained mmap randomization and only targets 64-bit to make aggressive use
-of the large address space.  There are lots of smaller differences in the
+of the large address space. There are lots of smaller differences in the
 implementation approach. It incorporates the previous extensions made to
 OpenBSD malloc including adding padding to allocations for canaries (distinct
 from the current OpenBSD malloc canaries), write-after-free detection tied to
@@ -539,7 +539,7 @@ to finding the per-size-class metadata. The part that's
 still open to different design choices is how arenas are assigned to threads.
 One approach is statically assigning arenas via round-robin like the standard
 jemalloc implementation, or statically assigning to a random arena which is essentially
-the current implementation.  Another option is dynamic load balancing via a
+the current implementation. Another option is dynamic load balancing via a
 heuristic like `sched_getcpu` for per-CPU arenas, which would offer better
 performance than randomly choosing an arena each time while being more
 predictable for an attacker. There are actually some security benefits from
@@ -550,7 +550,7 @@ varying usage of size classes.
 When there's substantial allocation or deallocation pressure, the allocator
 does end up calling into the kernel to purge / protect unused slabs by
 replacing them with fresh `PROT_NONE` regions along with unprotecting slabs
-when partially filled and cached empty slabs are depleted.  There will be
+when partially filled and cached empty slabs are depleted. There will be
 configuration over the amount of cached empty slabs, but it's not entirely a
 performance vs. memory trade-off since memory protecting unused slabs is a
 nice opportunistic boost to security. However, it's not really part of the core