Daniel Micay
b84af9b499
add wrapper for madvise
2021-03-22 12:24:26 -04:00
Daniel Micay
e77ffa76d9
add initial malloc_trim slab quarantine purging
...
This currently only purges the quarantines for extended size classes.
2021-03-22 11:16:57 -04:00
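
A rough sketch of the purging idea, with invented names and data layout (the
real quarantine structures differ): walk the quarantined slots belonging to
the extended size classes and hand their pages back to the kernel via the
madvise wrapper, so quarantined memory stops holding committed pages while
the slots themselves stay quarantined.

    #include <stddef.h>
    #include <sys/mman.h>

    /* hypothetical quarantine ring for one extended size class */
    struct quarantine {
        void **slots;     /* quarantined allocations, NULL when empty */
        size_t length;    /* number of slots */
        size_t slot_size; /* size class, a multiple of the page size */
    };

    static void purge_quarantine(const struct quarantine *q) {
        for (size_t i = 0; i < q->length; i++) {
            if (q->slots[i] != NULL) {
                /* drop the backing pages; the address range stays reserved */
                madvise(q->slots[i], q->slot_size, MADV_DONTNEED);
            }
        }
    }
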
Daniel Micay
86b0b3e452
fix !CONFIG_EXTENDED_SIZE_CLASSES configuration
2021-03-21 18:09:02 -04:00
Daniel Micay
a3b4c163eb
drop unused header
2021-03-05 00:35:10 -05:00
Daniel Micay
ddd14bc421
avoid type comparison warning on some platforms
2021-02-16 17:18:35 -05:00
Daniel Micay
29b09648d6
avoid undefined clz and shift in edge cases
...
This is triggered when get_large_size_class is called with a size in the
range [1,4]. This can occur with aligned_alloc(8192, size). In practice,
it doesn't appear to cause any harm, but we shouldn't have any undefined
behavior for well-defined usage of the API. It also occurs if the caller
passes a pointer outside the slab region to free_sized but the expected
size is in the range [1,4]. That usage of free_sized is already going to
be considered undefined, but we should avoid having undefined behavior in
the caller trigger more undefined behavior when it's avoidable.
2021-02-16 08:31:17 -05:00
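
For context, a minimal illustration of the hazard rather than the project's
code: __builtin_clzll(0) is undefined, and so is shifting by the full width
of the type, so a tiny size such as the [1,4] range above has to be clamped
or special-cased before the highest-set-bit math runs.

    #include <limits.h>
    #include <stddef.h>

    /* ceil(log2(size)) for size >= 2; the guard keeps clz away from 0 */
    static unsigned size_to_log2_ceil(size_t size) {
        if (size < 2) {
            return 0; /* __builtin_clzll(size - 1) would be clzll(0): undefined */
        }
        unsigned bits = sizeof(unsigned long long) * CHAR_BIT;
        return bits - (unsigned)__builtin_clzll((unsigned long long)(size - 1));
    }
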
Thibaut Sautereau
1984cb3b3d
malloc_object_size: avoid fault for invalid region
...
It's the region pointer that can be NULL here, and p was checked at the
beginning of the function.
2021-02-10 17:43:36 -05:00
Thibaut Sautereau
76860c72e1
malloc_usable_size: clean abort on invalid region
...
It's the region pointer that can be NULL here, and p was checked at the
beginning of the function. Also fix the test accordingly.
2021-02-10 17:41:17 -05:00
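
Both fixes come down to checking the region table lookup result rather than
the user pointer. A self-contained sketch with stand-in names (regions_find
and fatal_error are hypothetical, not the allocator's real functions):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct region { void *p; size_t size; };

    /* stand-in for the allocator's region table lookup */
    static struct region *regions_find(const void *p) {
        (void)p;
        return NULL; /* pretend the pointer isn't a tracked large allocation */
    }

    /* stand-in for the allocator's fatal error path */
    static void fatal_error(const char *msg) {
        fprintf(stderr, "fatal allocator error: %s\n", msg);
        abort();
    }

    static size_t large_usable_size(const void *p) {
        struct region *r = regions_find(p);
        if (r == NULL) {
            /* p itself was already validated by the caller; it's the region
             * lookup that can fail, so abort cleanly instead of faulting on
             * a NULL dereference */
            fatal_error("invalid malloc_usable_size");
        }
        return r->size;
    }
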
Daniel Micay
5275563252
fix C++ sized deallocation check false positive
...
This is a compatibility issue triggered when both slab canaries and the
C++ allocator overloads providing sized deallocation checks are enabled.
The boundary where slab allocations are turned into large allocations
due to not having room for the canary in the largest slab allocation
size class triggers a false positive in the sized deallocation check.
2021-01-06 00:18:59 -05:00
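
The boundary can be pictured with made-up constants (not the allocator's
real values): once the canary no longer fits alongside a request in the
largest slab class, that request is served as a large allocation, so a sized
deallocation check that classifies purely by request size draws the
slab/large line in the wrong place.

    #include <stdbool.h>
    #include <stddef.h>

    #define LARGEST_SLAB_CLASS 16384 /* assumed value */
    #define CANARY_SIZE 8            /* assumed value */

    /* where a request actually ends up once the canary has to fit too */
    static bool served_from_slab(size_t request) {
        return request + CANARY_SIZE <= LARGEST_SLAB_CLASS;
    }

    /* a check that ignores the canary misclassifies requests in
     * (LARGEST_SLAB_CLASS - CANARY_SIZE, LARGEST_SLAB_CLASS] */
    static bool naive_check_expects_slab(size_t request) {
        return request <= LARGEST_SLAB_CLASS;
    }
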
Daniel Micay
b90f650153
fix sized deallocation check with large sizes
...
The CONFIG_CXX_ALLOCATOR feature enables sanity checks for sized
deallocation, and these checks weren't updated to handle the introduction
of size class rounding for large sizes.
2020-11-10 13:53:32 -05:00
Daniel Micay
b072022022
perform init sanity checks before MPK unsealing
2020-10-06 17:34:35 -04:00
Daniel Micay
2bb1c39d31
add MPK support for stats retrieval functions
2020-10-06 17:32:25 -04:00
Daniel Micay
0bf18b7c26
optimize malloc_usable_size enforce_init
2020-10-03 15:10:49 -04:00
Daniel Micay
178d4f320f
harden checks for uninitialized usage
2020-10-02 15:06:29 -04:00
Daniel Micay
483b1d7b8b
empty malloc_info output when stats are disabled
2020-09-17 17:42:18 -04:00
Daniel Micay
96eca21ac5
remove thread_local macro workaround glibc < 2.28
2020-09-17 17:38:40 -04:00
Daniel Micay
b4bbd09f07
change label for quarantined large allocations
2020-09-17 16:56:01 -04:00
Daniel Micay
a88305c01b
support disabling region quarantine
2020-09-17 16:53:34 -04:00
Daniel Micay
85c5c3736c
add stats tracking to special large realloc paths
2020-09-17 16:29:13 -04:00
Daniel Micay
96a9bcf3a1
move deprecated glibc extensions to the bottom
2020-09-17 16:20:05 -04:00
Daniel Micay
41fb89517a
simplify malloc_info code
2020-09-17 16:10:02 -04:00
Daniel Micay
50e0f1334c
add is_init check to malloc_info
2020-09-17 16:07:10 -04:00
Daniel Micay
9fb2791af2
add is_init check to h_mallinfo_arena_info
2020-09-17 16:00:03 -04:00
anupritaisno1
8974af86d1
hardened malloc: iterate -> malloc_iterate
...
Signed-off-by: anupritaisno1 <www.anuprita804@gmail.com>
2020-09-15 00:37:23 -04:00
Daniel Micay
dd7291ebfe
better wording for page size mismatch error
2020-08-05 18:10:53 -04:00
Daniel Micay
bcb93cab63
avoid an ifdef
2020-08-04 17:22:03 -04:00
rwarr627
f214bd541a
added check for if small allocations are free
2020-06-17 23:29:30 -04:00
Daniel Micay
722974f4e9
remove trailing whitespace
2020-06-13 09:59:50 -04:00
rwarr627
577524798e
calculates offset from start for small allocations
2020-06-13 01:27:32 -04:00
Daniel Micay
467ba8440f
add comment explaining slab cache size
2020-05-24 09:36:43 -04:00
Daniel Micay
067b3c864f
set slab cache sizes based on the largest slab
2020-05-24 09:31:02 -04:00
Daniel Micay
4a6bbe445c
limit cached slabs based on max size class
2020-05-13 01:05:37 -04:00
Daniel Micay
b672316bc7
use const for memory_corruption_check_small
...
This currently causes a warning (treated as an error) on Android where
malloc_usable_size uses a const pointer.
2020-04-30 16:06:32 -04:00
Daniel Micay
029a2edf28
remove trailing whitespace
2020-04-30 16:03:45 -04:00
rwarr627
35bd7cd76d
added memory corruption checking to malloc_usable_size for slab allocations
2020-04-29 18:06:15 -04:00
Daniel Micay
19365c25d6
remove workaround for Linux kernel MPK fork bug
2020-04-24 02:51:39 -04:00
Daniel Micay
0436227092
no longer need glibc pthread_atfork workaround
2020-03-29 11:40:12 -04:00
Daniel Micay
449962e044
disable obsolete glibc extensions elsewhere
2020-02-03 08:39:19 -05:00
Daniel Micay
a28da3c65a
use prefix for extended mallinfo functions
2019-09-07 18:33:24 -04:00
Daniel Micay
d37657e125
enable llvm-include-order tidy check
2019-08-18 02:39:55 -04:00
Daniel Micay
d80919fa1e
substantially raise the arbitrary arena limit
2019-07-12 03:43:33 -04:00
Daniel Micay
410e9efb93
extend configuration sanity checks
2019-07-11 17:09:48 -04:00
Daniel Micay
a32e26b8e9
avoid trying to use mremap outside of Linux
2019-07-05 21:59:44 -04:00
Daniel Micay
bc75c4db7b
realloc: use copy_size to check for canaries
...
This avoids unnecessarily copying the canary when doing a realloc from a
small size to a large size. It also avoids trying to copy a non-existent
canary out of a zero-size allocation, which is memory protected.
2019-06-17 00:28:10 -04:00
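
A hedged sketch of the decision, with invented names and constants: base the
canary handling on copy_size, the number of bytes actually carried over,
rather than on the old allocation's size. A slab-to-large realloc then skips
the canary work, and a zero-byte source (which is memory protected) is never
read at all.

    #include <stddef.h>
    #include <string.h>

    #define MAX_SLAB_SIZE 16384 /* assumed largest slab size class */
    #define CANARY_SIZE 8       /* assumed canary size */

    static void realloc_copy(void *new_p, const void *old_p,
                             size_t old_size, size_t new_size) {
        size_t copy_size = old_size < new_size ? old_size : new_size;
        if (copy_size > 0 && copy_size <= MAX_SLAB_SIZE) {
            /* only in this case does the copied range end with a slab canary;
             * exclude it (nonzero slab classes are assumed to be at least
             * CANARY_SIZE bytes, so this can't underflow) */
            copy_size -= CANARY_SIZE;
        }
        if (copy_size > 0) {
            memcpy(new_p, old_p, copy_size);
        }
    }
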
Daniel Micay
12525f2861
work around old glibc releases without threads.h
2019-06-06 08:10:57 -04:00
Daniel Micay
409a639312
provide working malloc_info outside Android too
2019-04-19 16:56:07 -04:00
Daniel Micay
494436c904
implement options handling for malloc_info
2019-04-19 16:23:14 -04:00
Daniel Micay
a13db3fc68
initialize size class CSPRNGs from init CSPRNG
...
This avoids making a huge number of getrandom system calls during
initialization. The init CSPRNG is unmapped before initialization
finishes, and the size class CSPRNGs are still reseeded from the OS. The
purpose of the independent CSPRNGs is simply to avoid the massive
performance hit of synchronization, and there's no harm in doing it this
way.
Keeping around the init CSPRNG and reseeding from it would defeat the
purpose of reseeding, and it isn't a measurable performance issue since
it can just be tuned to reseed less often.
2019-04-15 06:50:24 -04:00
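
A self-contained sketch of the fan-out pattern, with invented names and a
deliberately non-cryptographic stand-in generator (a real implementation
would use a proper stream cipher): one getrandom call seeds an init
generator, which then seeds each per-size-class generator.

    #include <stdint.h>
    #include <string.h>
    #include <sys/random.h>

    #define N_SIZE_CLASSES 48 /* illustrative count */
    #define SEED_SIZE 32

    struct csprng { uint8_t key[SEED_SIZE]; uint64_t counter; };

    static void csprng_seed(struct csprng *rng, const uint8_t seed[SEED_SIZE]) {
        memcpy(rng->key, seed, SEED_SIZE);
        rng->counter = 0;
    }

    /* placeholder for "draw SEED_SIZE output bytes from rng" */
    static void csprng_bytes(struct csprng *rng, uint8_t out[SEED_SIZE]) {
        for (size_t i = 0; i < SEED_SIZE; i++) {
            rng->counter = rng->counter * 6364136223846793005ULL +
                           1442695040888963407ULL;
            out[i] = (uint8_t)(rng->counter >> 33) ^ rng->key[i];
        }
    }

    static struct csprng size_class_rng[N_SIZE_CLASSES];

    static void init_rngs(void) {
        struct csprng init_rng;
        uint8_t seed[SEED_SIZE];

        /* one system call instead of one per size class */
        if (getrandom(seed, SEED_SIZE, 0) != SEED_SIZE) {
            __builtin_trap();
        }
        csprng_seed(&init_rng, seed);

        /* fan out to the per-size-class generators with no further syscalls;
         * the init generator is discarded once this returns */
        for (size_t i = 0; i < N_SIZE_CLASSES; i++) {
            csprng_bytes(&init_rng, seed);
            csprng_seed(&size_class_rng[i], seed);
        }
    }
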
Daniel Micay
f115be8392
shrink initial region table size to fit in 1 page
2019-04-15 00:04:00 -04:00
Daniel Micay
e7eeb3f35c
avoid reading thread_local more than once
2019-04-14 20:26:14 -04:00
Daniel Micay
7e465c621e
use allocate_large directly in large remap path
2019-04-14 19:46:22 -04:00
Daniel Micay
1c899657c1
add is_init check to mallinfo functions
2019-04-14 19:12:38 -04:00
Daniel Micay
8774065b13
fix non-init size for malloc_object_size extension
2019-04-14 19:01:25 -04:00
Daniel Micay
84a25ec83e
fix build with CONFIG_STATS enabled
2019-04-11 00:51:34 -04:00
Daniel Micay
d4b8fee1c4
allow using the largest slab allocation size
2019-04-10 16:54:58 -04:00
Daniel Micay
086eb1fee4
at a final spacing class of 1 slot size classes
2019-04-10 16:32:24 -04:00
Daniel Micay
7a89a7b8c5
support for slabs with 1 slot for largest sizes
2019-04-10 16:26:49 -04:00
Daniel Micay
6c31f6710a
support extended range of small size classes
2019-04-10 08:31:51 -04:00
Daniel Micay
d5f18c47b3
micro-optimize initialization with arenas
2019-04-10 08:07:24 -04:00
Daniel Micay
62c73d8b41
harden thread_arena check
2019-04-10 07:40:29 -04:00
Daniel Micay
d5c00b4d0d
disable current in-place growth code path for now
2019-04-09 19:20:34 -04:00
Daniel Micay
d5c1bca915
use round-robin assignment to arenas
...
The initial implementation was a temporary hack rather than a serious
implementation of random arena selection. It may still make sense to
offer it but it should be implemented via the CSPRNG instead of this
silly hack. It would also make sense to offer dynamic load balancing,
particularly with sched_getcpu().
This results in a much more predictable spread across arenas. This is
one place where randomization probably isn't a great idea because it
makes the benefits of arenas unpredictable in programs not creating a
massive number of threads. The security benefits of randomization for
this are also quite small. It's not certain that randomization is even a
net win for security since it's not random enough and can result in a
more interesting mix of threads in the same arena for an attacker if
they're able to attempt multiple attacks.
2019-04-09 16:54:14 -04:00
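
A minimal sketch of the round-robin assignment with an atomic counter (the
names and arena count are illustrative): each thread takes the next index on
first use, giving the predictable spread described above.

    #include <stdatomic.h>

    #define N_ARENA 4 /* illustrative arena count */

    static _Atomic unsigned thread_arena_counter;
    static _Thread_local unsigned thread_arena = N_ARENA; /* N_ARENA = unassigned */

    static unsigned get_arena(void) {
        if (thread_arena == N_ARENA) {
            thread_arena = atomic_fetch_add_explicit(&thread_arena_counter, 1,
                                                     memory_order_relaxed) % N_ARENA;
        }
        return thread_arena;
    }
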
Daniel Micay
9a0de626fc
move stats accounting to utility functions
2019-04-09 03:57:44 -04:00
Daniel Micay
9453332e57
remove redundant else block
2019-04-09 00:06:17 -04:00
Daniel Micay
a4cff7a960
factor out slab memory_set_name into label_slab
2019-04-07 18:02:56 -04:00
Daniel Micay
ef90f404a6
add sanity check for stats option
2019-04-07 09:06:03 -04:00
Daniel Micay
e0891c8cfc
implement the option of large size classes
...
This extends the size class scheme used for slab allocations to large
allocations. This drastically improves performance for many real world
programs using incremental realloc growth instead of using proper growth
factors. There are 4 size classes for every doubling in size, resulting
in a worst case of ~20% extra virtual memory being reserved and a huge
increase in performance for pathological cases. For example, growing
from 4MiB to 8MiB by calling realloc in increments of 32 bytes will only
need to do work beyond looking up the size 4 times instead of 1024 times
with 4096 byte granularity.
2019-04-07 08:52:17 -04:00
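
A hedged sketch of the rounding scheme, with invented names: 4 classes per
doubling means sizes between 2^n and 2^(n+1) are spaced 2^(n-2) apart, so the
4 MiB to 8 MiB example above only crosses a class boundary at 5, 6, 7 and
8 MiB.

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    /* round a large size up to one of 4 size classes per doubling */
    static size_t round_large_size(size_t size) {
        if (size <= 4) {
            return 4; /* keep clz and the shifts well-defined for tiny inputs */
        }
        unsigned bits = sizeof(unsigned long long) * CHAR_BIT;
        unsigned log = bits - 1 -
            (unsigned)__builtin_clzll((unsigned long long)(size - 1));
        size_t step = (size_t)1 << (log >= 2 ? log - 2 : 0);
        return (size + step - 1) & ~(step - 1);
    }

    int main(void) {
        /* 4 MiB + 32 bytes rounds up to 5 MiB; the next boundaries are
         * 6, 7 and 8 MiB */
        printf("%zu\n", round_large_size(4u * 1024 * 1024 + 32));
        return 0;
    }
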
Daniel Micay
c68de6141d
factor out duplicated code in malloc/realloc
2019-04-07 05:48:10 -04:00
Daniel Micay
ce36d0c826
split out allocate_large function
2019-04-07 05:44:09 -04:00
Daniel Micay
3d18fb8074
implement Android M_PURGE mallopt via malloc_trim
2019-04-07 03:35:26 -04:00
Daniel Micay
4f08e40fe5
move thread sealing implementation
2019-04-07 00:50:26 -04:00
Daniel Micay
55891357ff
clean up the exported API section of the code
2019-04-07 00:36:53 -04:00
Daniel Micay
491ce6b0b1
no need to provide valloc and pvalloc on Android
2019-04-07 00:31:09 -04:00
Daniel Micay
1eed432b9a
limit more glibc cruft to that environment
2019-04-07 00:30:05 -04:00
Daniel Micay
27a4c883ce
extend stats with nmalloc and ndalloc
2019-04-06 23:19:03 -04:00
Daniel Micay
e94fe50a0d
include zero byte size class in stats
...
The allocations don't consume any actual memory, but they do still use
up the virtual memory assigned to the size class and require metadata.
2019-04-06 22:43:56 -04:00
Daniel Micay
712748aaa8
add implementation of Android mallinfo extensions
...
These are used internally by Bionic to implement malloc_info.
2019-04-06 22:39:01 -04:00
Daniel Micay
0f107cd2a3
only provide malloc_info stub for glibc
...
This has a proper implementation in Bionic outside of the malloc
implementation via the extended mallinfo API.
2019-04-06 22:01:12 -04:00
Daniel Micay
350d0e5fd2
add real mallinfo implementation for Android
...
Android Q uses the mallinfo implementation in the ART GC:
c220f98180
1575267302
2019-04-06 20:54:26 -04:00
Daniel Micay
df9650fe64
conditionally include threads.h
2019-03-26 01:28:27 -04:00
Daniel Micay
98deb9de52
relabel malloc read-only after init data
2019-03-25 20:34:10 -04:00
Daniel Micay
fc8f2c3b60
move pthread_atfork wrapper to util header
2019-03-25 17:16:52 -04:00
Daniel Micay
b5187a0aff
only use __register_atfork hack for old glibc
2019-03-25 17:16:22 -04:00
Daniel Micay
c5e911419d
add initial implementation of arenas
2019-03-25 14:59:50 -04:00
Daniel Micay
55769496dc
move hash_page to pages.h
2019-03-25 14:54:22 -04:00
Daniel Micay
13de480bde
rename quarantine bitmap field for clarity
2019-03-24 20:24:40 -04:00
Daniel Micay
3d142eb4c2
relabel large allocation guards when shrinking
2019-03-23 23:01:12 -04:00
Daniel Micay
64dfd23f7b
relabel purged slabs
2019-03-23 22:59:59 -04:00
Daniel Micay
178ec6e3f9
relabel quarantined large allocation regions
2019-03-23 22:57:19 -04:00
Daniel Micay
6e67106882
label malloc slab region gaps
2019-03-23 22:54:56 -04:00
Daniel Micay
1d62075291
label allocate_aligned_pages mappings
2019-03-23 22:29:04 -04:00
Daniel Micay
45337ebe07
label allocate_pages mappings
2019-03-22 23:17:38 -04:00
Daniel Micay
65311a5df2
relabel region table mapping
2019-03-22 21:59:44 -04:00
Daniel Micay
4a000d96e2
pkey state is now preserved on fork for Linux 5.0+
...
This patch is going to be backported to stable kernels, so the check
could be expanded to allow recent enough stable kernel branches.
2019-03-20 11:05:31 -04:00
Daniel Micay
c9df70d934
add support for labelling memory regions
2019-02-13 13:34:33 -05:00
Daniel Micay
2d7882ec0e
remove redundant unseal / seal metadata
2019-01-08 17:01:56 -05:00
Daniel Micay
fa17f70a73
add more configuration sanity checks
2019-01-06 00:52:25 -05:00
Daniel Micay
57f115b33c
scale slab quarantine based on size
2019-01-02 14:52:13 -05:00
Daniel Micay
ccc2a86501
rename quarantine size -> length for clarity
2019-01-02 14:17:02 -05:00
Daniel Micay
bc2cb5c828
fix builds with both random and queue quarantine
2019-01-02 13:23:49 -05:00