mte: use tag 0 for freed slots, stop reserving tag 15

pull/242/head 2024020500-redfin
Dmitry Muhomor 2024-01-23 19:50:26 +02:00 committed by Daniel Micay
parent 3c1f40aff0
commit 7268189933
4 changed files with 17 additions and 19 deletions

@@ -724,15 +724,15 @@ freeing as there would be if the kernel supported these features directly.
 ## Memory tagging
-Random tags are set for all slab allocations when allocated, with 5 excluded values:
+Random tags are set for all slab allocations when allocated, with 4 excluded values:
-1. the default `0` tag
-2. a statically *reserved free tag*
-3. the previous tag used for the slot
-4. the current (or previous) tag used for the slot to the left
-5. the current (or previous) tag used for the slot to the right
+1. the reserved `0` tag
+2. the previous tag used for the slot
+3. the current (or previous) tag used for the slot to the left
+4. the current (or previous) tag used for the slot to the right
-When a slab allocation is freed, the *reserved free tag* is set for the slot.
+When a slab allocation is freed, the reserved `0` tag is set for the slot.
 Slab allocation slots are cleared before reuse when memory tagging is enabled.
 This ensures the following properties:
@@ -740,10 +740,8 @@ This ensures the following properties:
 - Use-after-free are deterministically detected until the freed slot goes through
   both the random and FIFO quarantines, gets allocated again, goes through both
   quarantines again and then finally gets allocated again for a 2nd time.
-- Since the default `0` tag isn't used, untagged memory can't access malloc allocations
-  and vice versa, although it may make sense to reuse the default tag for free
-  data to avoid reducing the possible random tags from 15 to 14, since freed
-  data is always zeroed anyway.
+- Since the default `0` tag is reserved, untagged pointers can't access slab
+  allocations and vice versa.
 Slab allocations are done in a statically reserved region for each size class
 and all metadata is in a statically reserved region, so interactions between
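The rules above leave at least 12 of the 16 possible tags as candidates for any given allocation, since at most four values (the reserved `0` tag, the slot's previous tag and the two neighbors' tags) are excluded. A minimal sketch of that selection logic, for illustration only (the `pick_slot_tag` helper and the use of `rand()` are assumptions; the allocator itself builds a hardware exclusion mask, as shown in the tag exclusion mask (`tem`) hunk further down):

```c
#include <stdint.h>
#include <stdlib.h>

// Illustrative model of the README rules (not hardened_malloc's actual code):
// pick a random 4-bit tag while rejecting the reserved 0 tag, the slot's
// previous tag and the current/previous tags of the two neighboring slots.
static uint8_t pick_slot_tag(uint8_t prev_tag, uint8_t left_tag, uint8_t right_tag) {
    uint16_t excluded = (1u << 0) | (1u << prev_tag) | (1u << left_tag) | (1u << right_tag);
    for (;;) {
        uint8_t tag = (uint8_t)(rand() & 0xf); // stand-in for a proper random source
        if (!(excluded & (1u << tag))) {
            return tag;
        }
    }
}
```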

@@ -47,9 +47,9 @@ void *set_pointer_tag(void *ptr, u8 tag) {
 // This test checks that slab slot allocation uses tag that is distinct from tags of its neighbors
 // and from the tag of the previous allocation that used the same slot
 void tag_distinctness() {
-    // 0 and 15 are reserved
+    // tag 0 is reserved
     const int min_tag = 1;
-    const int max_tag = 14;
+    const int max_tag = 0xf;
     struct SizeClass {
         int size;
@@ -148,8 +148,8 @@ void tag_distinctness() {
             }
         }
-        // check that all of the tags were used, except reserved ones
-        assert(seen_tags == (0xffff & ~(1 << 0 | 1 << 15)));
+        // check that all of the tags were used, except for the reserved tag 0
+        assert(seen_tags == (0xffff & ~(1 << 0)));
         printf("size_class\t%i\t" "tdc_left %i\t" "tdc_right %i\t" "tdc_prev_alloc %i\n",
                sc.size, left_neighbor_tdc_cnt, right_neighbor_tdc_cnt, prev_alloc_tdc_cnt);
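The hunk context above names the test's set_pointer_tag() helper. On AArch64, the MTE logical tag occupies bits 56-59 of the pointer, so such a helper is plain bit manipulation; the following is a rough sketch under that assumption (names are illustrative, not necessarily the test's exact code):

```c
#include <stdint.h>

// Place a 4-bit MTE logical tag into bits 56-59 of a pointer (sketch).
static void *set_tag(void *ptr, uint8_t tag) {
    uintptr_t p = (uintptr_t)ptr;
    p &= ~((uintptr_t)0xf << 56);      // clear the old tag bits
    p |= (uintptr_t)(tag & 0xf) << 56; // insert the new tag
    return (void *)p;
}

// Read the tag back out of a pointer (sketch).
static uint8_t get_tag(void *ptr) {
    return (uint8_t)(((uintptr_t)ptr >> 56) & 0xf);
}
```

Reading the updated assertion, seen_tags appears to collect one bit per tag observed across the allocations, so with only tag 0 reserved the test now expects every bit except bit 0 to be set, i.e. 0xfffe rather than the old 0x7ffe.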

@@ -574,9 +574,8 @@ static void *tag_and_clear_slab_slot(struct slab_metadata *metadata, void *slot_
     // is constructed.
     u8 *slot_tags = metadata->arm_mte_tags;
-    // Tag exclusion mask. 0 tag is always excluded to detect accesses to slab memory via untagged
-    // pointers. Moreover, 0 tag is excluded in bionic via PR_MTE_TAG_MASK prctl
-    u64 tem = (1 << 0) | (1 << RESERVED_TAG);
+    // tag exclusion mask
+    u64 tem = (1 << RESERVED_TAG);
     // current or previous tag of left neighbor or 0 if there's no left neighbor or if it was never used
     tem |= (1 << u4_arr_get(slot_tags, slot_idx));
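The tem value is a 16-bit exclusion bitmap in which bit i forbids tag i: it starts with the reserved tag and then accumulates the slot's previous tag and the neighbors' tags. A helper along the following lines could hand that mask to the hardware; this is a hypothetical wrapper for illustration, not hardened_malloc's actual arm_mte.h API:

```c
#include <stdint.h>

// Sketch only: the AArch64 IRG instruction inserts a random logical tag into
// bits 56-59 of the pointer, skipping any tag whose bit is set in the
// exclusion mask (and any tag excluded process-wide via GCR_EL1).
// Requires an MTE-capable toolchain, e.g. -march=armv8.5-a+memtag.
static inline void *insert_random_tag(void *ptr, uint64_t exclusion_mask) {
    __asm__("irg %0, %1, %2" : "=r"(ptr) : "r"(ptr), "r"(exclusion_mask));
    return ptr;
}
```

Since the process-wide exclusions configured through the PR_MTE_TAG_MASK prctl end up in GCR_EL1, the bionic behaviour mentioned in the removed comment is honoured by the hardware in addition to whatever tem excludes.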

@@ -6,7 +6,8 @@
 #ifdef HAS_ARM_MTE
 #include "arm_mte.h"
 #define MEMTAG 1
-#define RESERVED_TAG 15
+// Note that bionic libc always reserves tag 0 via PR_MTE_TAG_MASK prctl
+#define RESERVED_TAG 0
 #define TAG_WIDTH 4
 #endif
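For reference, the bionic behaviour noted in the new comment boils down to a single prctl that tells the kernel which tags IRG may generate. A rough sketch under the assumption of an MTE-aware Linux kernel (the fault-checking mode chosen here is arbitrary and the function name is made up):

```c
#include <sys/prctl.h>

// Fallbacks for older userspace headers; values match linux/prctl.h.
#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL 55
#endif
#ifndef PR_TAGGED_ADDR_ENABLE
#define PR_TAGGED_ADDR_ENABLE (1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC (1UL << 1)
#endif
#ifndef PR_MTE_TAG_SHIFT
#define PR_MTE_TAG_SHIFT 3
#endif

// Allow IRG to generate tags 1-15 only: bit i of the PR_MTE_TAG_MASK field
// marks tag i as allowed, so 0xfffe reserves the default 0 tag, matching the
// comment above. Sketch only, not bionic's actual initialization code.
static int reserve_tag_zero(void) {
    unsigned long tag_mask = 0xfffeUL << PR_MTE_TAG_SHIFT;
    return prctl(PR_SET_TAGGED_ADDR_CTRL,
                 PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC | tag_mask,
                 0, 0, 0);
}
```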