Compare commits

No commits in common. "main" and "2025050500" have entirely different histories.

5 changed files with 31 additions and 34 deletions
@@ -11,9 +11,9 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        version: [14]
+        version: [12, 13, 14]
     steps:
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
       - name: Setting up gcc version
         run: |
           sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-${{ matrix.version }} 100
@@ -24,11 +24,11 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        version: [19, 20]
+        version: [14, 15, 16, 17, 18]
     steps:
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
       - name: Install dependencies
-        run: sudo apt-get update && sudo apt-get install -y --no-install-recommends clang-19 clang-20
+        run: sudo apt-get update && sudo apt-get install -y --no-install-recommends clang-14 clang-15
       - name: Setting up clang version
         run: |
           sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-${{ matrix.version }} 100
@@ -40,7 +40,7 @@ jobs:
     container:
       image: alpine:latest
     steps:
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
       - name: Install dependencies
         run: apk update && apk add build-base python3
       - name: Build
@@ -48,7 +48,7 @@
   build-ubuntu-gcc-aarch64:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v6
+      - uses: actions/checkout@v4
       - name: Install dependencies
         run: sudo apt-get update && sudo apt-get install -y --no-install-recommends gcc-aarch64-linux-gnu g++-aarch64-linux-gnu libgcc-s1-arm64-cross cpp-aarch64-linux-gnu
       - name: Build

.gitignore

@@ -1,2 +1,2 @@
-/out/
-/out-light/
+out/
+out-light/


@@ -1,4 +1,4 @@
-# hardened_malloc
+# Hardened malloc
 
 * [Introduction](#introduction)
 * [Dependencies](#dependencies)
@@ -65,14 +65,14 @@ used instead as this allocator fundamentally doesn't support that environment.
 
 ## Dependencies
 
-Debian stable (currently Debian 13) determines the most ancient set of
+Debian stable (currently Debian 12) determines the most ancient set of
 supported dependencies:
 
-* glibc 2.41
-* Linux 6.12
-* Clang 19.1.7 or GCC 14.2.0
+* glibc 2.36
+* Linux 6.1
+* Clang 14.0.6 or GCC 12.2.0
 
-For Android, the Linux GKI 6.1, 6.6 and 6.12 branches are supported.
+For Android, the Linux GKI 5.10, 5.15 and 6.1 branches are supported.
 
 However, using more recent releases is highly recommended. Older versions of
 the dependencies may be compatible at the moment but are not tested and will
@@ -83,7 +83,7 @@ there will be custom integration offering better performance in the future
 along with other hardening for the C standard library implementation.
 
 For Android, only the current generation, actively developed maintenance branch of the Android
-Open Source Project will be supported, which currently means `android16-qpr1-release`.
+Open Source Project will be supported, which currently means `android15-release`.
 
 ## Testing
@@ -159,17 +159,14 @@ line to the `/etc/ld.so.preload` configuration file:
 The format of this configuration file is a whitespace-separated list, so it's
 good practice to put each library on a separate line.
 
-For maximum compatibility `libhardened_malloc.so` can be installed into
-`/usr/lib/` to avoid preload failures caused by AppArmor profiles or systemd
-ExecPaths= restrictions. Check for logs of the following format:
-
-ERROR: ld.so: object '/usr/local/lib/libhardened_malloc.so' from /etc/ld.so.preload cannot be preloaded (failed to map segment from shared object): ignored.
+On Debian systems `libhardened_malloc.so` should be installed into `/usr/lib/`
+to avoid preload failures caused by AppArmor profile restrictions.
 
 Using the `LD_PRELOAD` environment variable to load it on a case-by-case basis
 will not work when `AT_SECURE` is set such as with setuid binaries. It's also
 generally not a recommended approach for production usage. The recommendation
 is to enable it globally and make exceptions for performance critical cases by
-running the application in a container/namespace without it enabled.
+running the application in a container / namespace without it enabled.
 
 Make sure to raise `vm.max_map_count` substantially too to accommodate the very
 large number of guard pages created by hardened\_malloc. As an example, in
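As a concrete illustration of the setup both versions of this section describe (the install prefix is taken from the error message quoted above and may differ per system), the `/etc/ld.so.preload` entry is a single absolute path:

    /usr/local/lib/libhardened_malloc.so

and `vm.max_map_count` can be raised persistently with a sysctl drop-in, e.g. a hypothetical `/etc/sysctl.d/hardened_malloc.conf` (the value is illustrative):

    vm.max_map_count = 1048576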
@@ -255,7 +252,7 @@ The following boolean configuration options are available:
 * `CONFIG_WRITE_AFTER_FREE_CHECK`: `true` (default) or `false` to control
   sanity checking that new small allocations contain zeroed memory. This can
   detect writes caused by a write-after-free vulnerability and mixes well with
-  the features for making memory reuse randomized/delayed. This has a
+  the features for making memory reuse randomized / delayed. This has a
   performance cost scaling to the size of the allocation, which is usually
   acceptable. This is not relevant to large allocations because they're always
   a fresh memory mapping from the kernel.
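A rough sketch of what `CONFIG_WRITE_AFTER_FREE_CHECK` implies (illustrative C with assumed names, not the allocator's actual code): slots are zero-filled on free, so any non-zero word found when the slot is handed out again indicates a write-after-free:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    // Illustrative sketch: verify a slot that was zeroed on free is still
    // zeroed when reallocated; a non-zero word means something wrote to the
    // slot while it was free.
    static void check_zero_on_allocation(const void *slot, size_t size) {
        const uint64_t *words = slot;
        for (size_t i = 0; i < size / sizeof(uint64_t); i++) {
            if (words[i] != 0) {
                abort(); // treat as a fatal allocator error
            }
        }
    }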
@@ -341,7 +338,7 @@ larger caches can substantially improves performance).
 
 ## Core design
 
-The core design of the allocator is very simple/minimalist. The allocator is
+The core design of the allocator is very simple / minimalist. The allocator is
 exclusive to 64-bit platforms in order to take full advantage of the abundant
 address space without being constrained by needing to keep the design
 compatible with 32-bit.
@@ -373,13 +370,13 @@ whether it's free, along with a separate bitmap for tracking allocations in the
 quarantine. The slab metadata entries in the array have intrusive lists
 threaded through them to track partial slabs (partially filled, and these are
 the first choice for allocation), empty slabs (limited amount of cached free
-memory) and free slabs (purged/memory protected).
+memory) and free slabs (purged / memory protected).
 
 Large allocations are tracked via a global hash table mapping their address to
 their size and random guard size. They're simply memory mappings and get mapped
 on allocation and then unmapped on free. Large allocations are the only dynamic
 memory mappings made by the allocator, since the address space for allocator
-state (including both small/large allocation metadata) and slab allocations
+state (including both small / large allocation metadata) and slab allocations
 is statically reserved.
 
 This allocator is aimed at production usage, not aiding with finding and fixing
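The metadata layout described in this hunk can be pictured with a hedged C sketch (field names and sizes are assumptions for illustration, not the project's definitions):

    #include <stddef.h>
    #include <stdint.h>

    // One entry per slab, living in an array entirely outside the slab region.
    struct slab_metadata {
        uint64_t bitmap[4];            // per-slot allocated/free tracking
        uint64_t quarantine_bitmap[4]; // per-slot quarantine tracking
        struct slab_metadata *next;    // intrusive links threading this entry
        struct slab_metadata *prev;    //   onto the partial/empty/free lists
    };

    // Large allocations: one hash table entry per live memory mapping.
    struct large_allocation {
        void *address;
        size_t size;
        size_t guard_size; // random guard region size around the mapping
    };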
@@ -390,7 +387,7 @@ messages. The design choices are based around minimizing overhead and
 maximizing security which often leads to different decisions than a tool
 attempting to find bugs. For example, it uses zero-based sanitization on free
 and doesn't minimize slack space from size class rounding between the end of an
-allocation and the canary/guard region. Zero-based filling has the least
+allocation and the canary / guard region. Zero-based filling has the least
 chance of uncovering latent bugs, but also the best chance of mitigating
 vulnerabilities. The canary feature is primarily meant to act as padding
 absorbing small overflows to render them harmless, so slack space is helpful
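A hedged sketch of the canary check this paragraph alludes to (assumed names and layout; the real implementation differs): the canary occupies the end of the slot, with size class slack in between absorbing small overflows:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    // Slot layout assumed here: [allocation][slack][8-byte canary].
    static void check_canary(const char *slot, size_t slot_size,
                             uint64_t expected) {
        uint64_t canary;
        memcpy(&canary, slot + slot_size - sizeof(canary), sizeof(canary));
        if (canary != expected) {
            abort(); // an overflow reached past the slack into the canary
        }
    }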
@@ -424,11 +421,11 @@ was a bit less important and if a core goal was finding latent bugs.
 * Top-level isolated regions for each arena
 * Divided up into isolated inner regions for each size class
 * High entropy random base for each size class region
-* No deterministic/low entropy offsets between allocations with
+* No deterministic / low entropy offsets between allocations with
   different size classes
 * Metadata is completely outside the slab allocation region
 * No references to metadata within the slab allocation region
-* No deterministic/low entropy offsets to metadata
+* No deterministic / low entropy offsets to metadata
 * Entire slab region starts out non-readable and non-writable
 * Slabs beyond the cache limit are purged and become non-readable and
   non-writable memory again
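The first few properties in the list above can be made concrete with a short hedged sketch (parameter names are illustrative): each size class region is reserved up front as `PROT_NONE` at a high entropy random base, so nothing in it is readable or writable until activated:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    // Reserve an inaccessible region and pick a random, page-aligned base
    // inside it; everything stays PROT_NONE until a slab is activated.
    static void *reserve_size_class_region(size_t region_size, uint64_t rnd) {
        char *base = mmap(NULL, region_size * 2, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (base == MAP_FAILED) {
            return NULL;
        }
        uintptr_t offset = (rnd % region_size) & ~(uintptr_t)4095;
        return base + offset;
    }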
@@ -649,7 +646,7 @@ other. Static assignment can also reduce memory usage since threads may have
 varying usage of size classes.
 
 When there's substantial allocation or deallocation pressure, the allocator
-does end up calling into the kernel to purge/protect unused slabs by
+does end up calling into the kernel to purge / protect unused slabs by
 replacing them with fresh `PROT_NONE` regions along with unprotecting slabs
 when partially filled and cached empty slabs are depleted. There will be
 configuration over the amount of cached empty slabs, but it's not entirely a
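The purge / protect step described here maps directly onto standard mmap semantics; a hedged sketch, not the allocator's actual functions:

    #include <stddef.h>
    #include <sys/mman.h>

    // Replacing a slab with a fresh PROT_NONE mapping both discards its pages
    // and makes it inaccessible in a single call.
    static int purge_slab(void *slab, size_t size) {
        void *p = mmap(slab, size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE | MAP_FIXED,
                       -1, 0);
        return p == MAP_FAILED ? -1 : 0;
    }

    // Unprotecting makes a cached slab usable again without a new mapping.
    static int activate_slab(void *slab, size_t size) {
        return mprotect(slab, size, PROT_READ | PROT_WRITE);
    }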
@@ -696,7 +693,7 @@ The secondary benefit of thread caches is being able to avoid the underlying
 allocator implementation entirely for some allocations and deallocations when
 they're mixed together rather than many allocations being done together or many
 frees being done together. The value of this depends a lot on the application
-and it's entirely unsuitable/incompatible with a hardened allocator since it
+and it's entirely unsuitable / incompatible with a hardened allocator since it
 bypasses all of the underlying security and would destroy much of the security
 value.
@@ -960,7 +957,7 @@ doesn't handle large allocations within the arenas, so it presents those in the
 For example, with 4 arenas enabled, there will be a 5th arena in the statistics
 for the large allocations.
 
-The `nmalloc`/`ndalloc` fields are 64-bit integers tracking allocation and
+The `nmalloc` / `ndalloc` fields are 64-bit integers tracking allocation and
 deallocation count. These are defined as wrapping on overflow, per the jemalloc
 implementation.
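The wrapping behavior falls out of C's unsigned arithmetic, so counters with these semantics need no special overflow handling; a trivial sketch with the field names from the text:

    #include <stdint.h>

    // uint64_t increments wrap modulo 2^64 by definition, matching the
    // jemalloc-style nmalloc/ndalloc semantics described above.
    static uint64_t nmalloc, ndalloc;

    static void note_malloc(void) { nmalloc++; }
    static void note_free(void)   { ndalloc++; }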


@@ -44,7 +44,7 @@ void *set_pointer_tag(void *ptr, u8 tag) {
     return (void *) (((uintptr_t) tag << 56) | (uintptr_t) untag_pointer(ptr));
 }
 
-// This test checks that slab slot allocation uses tag that is distinct from tags of its neighbors
+// This test checks that slab slot allocation uses tag that is distint from tags of its neighbors
 // and from the tag of the previous allocation that used the same slot
 void tag_distinctness() {
     // tag 0 is reserved
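For context, a hypothetical use of the helpers shown in this hunk (assuming the file's `untag_pointer` counterpart and top-byte tags per the `<< 56` shift above):

    #include <assert.h>
    #include <stdlib.h>

    // Hypothetical usage of the file's helpers: tag a pointer, then confirm
    // untagging restores the original address.
    static void tag_round_trip(void) {
        void *p = malloc(16);
        void *tagged = set_pointer_tag(p, 1);
        assert(untag_pointer(tagged) == p);
        free(p);
    }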


@@ -98,7 +98,7 @@ class TestSimpleMemoryCorruption(unittest.TestCase):
         self.assertEqual(stderr.decode("utf-8"),
                          "fatal allocator error: invalid free\n")
 
-    def test_invalid_malloc_usable_size_small_quarantine(self):
+    def test_invalid_malloc_usable_size_small_quarantene(self):
         _stdout, stderr, returncode = self.run_test(
             "invalid_malloc_usable_size_small_quarantine")
         self.assertEqual(returncode, -6)