Compare commits

17 commits

Author SHA1 Message Date
Ram Karthikeya Musti
09b4f9ab1f Removing "[future]" tag in documentation
ARM MTE support has already been added to hardened_malloc.
2026-03-02 11:44:26 -05:00
Daniel Micay
3bee8d3e0e fix realloc from small sized allocations with above PAGE_SIZE alignment
Large allocations don't always have a size larger than the maximum slab
size class because alignment larger than PAGE_SIZE is handled via large
allocations. The general case in realloc was assuming small sizes imply
slab allocations, which isn't guaranteed.

Alignment above PAGE_SIZE is rare, and realloc doesn't preserve alignment,
so passing aligned allocations to realloc is also rare. In practice, it
ends up doing invalid accesses within the reserved metadata region which
will almost always crash due to it being largely PROT_NONE memory and it
having an extremely high likelihood of indexing into the PROT_NONE areas
rather than the actual metadata. That means if this impacted an app, it
would currently be crashing in practice. Due to the reserved region for
metadata and the fact that it would be crashing, this can be ruled out
as a security concern but is potentially an extremely rare compatibility
issue if there's any code using this.

Reported-by: Stefan Rus <stefan@photonspark.com>
2026-02-22 14:58:24 -05:00
Daniel Micay
1044b541a9 update libdivide to 5.3.0 2026-02-16 11:30:28 -05:00
bravesasha
d4e40af550 Update LICENSE 2026-01-07 03:07:41 -05:00
qikp0
bb9187b94c Android 16 QPR2 is now the active branch of AOSP 2026-01-03 14:47:39 -05:00
Ganwtrs
261b7bbf09 Correct title of README from Hardened malloc to hardened_malloc 2025-12-06 00:40:28 -05:00
Ganwtrs
74ef8a96ed Remove spaces around the slash (like one/two) 2025-12-05 21:55:56 -05:00
dependabot[bot]
c110ba88f3 build(deps): bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-20 13:27:29 -05:00
charles25565
a000fd4b5e Bump minimum AOSP version to QPR1 2025-11-15 17:04:35 -05:00
Charles
5cb0ff9f4d gitignore: use exact matches 2025-10-29 16:26:38 -04:00
Daniel Micay
e371736b17 drop legacy compiler versions from GitHub workflow 2025-09-23 18:12:57 -04:00
Daniel Micay
c46d3cab33 add newer Clang versions for GitHub workflow 2025-09-23 18:12:39 -04:00
Christian Göttsche
33ed3027ab Fix two typos 2025-09-21 12:35:28 -04:00
Christian Göttsche
86dde60fcf ReadMe: adjust section about library location 2025-09-21 12:35:28 -04:00
charles25565
ff99511eb4 Update dependencies in README
Update from bookworm to trixie, updating GKIs, and changing to Android 16.
2025-09-17 11:03:53 -04:00
Daniel Micay
c392d40843 update GitHub actions/checkout to 5 2025-08-12 00:28:58 -04:00
Віктор Дуйко
7481c8857f docs: updated the license date 2025-04-05 13:13:18 -04:00
8 changed files with 201 additions and 164 deletions


@ -11,9 +11,9 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
version: [12, 13, 14]
version: [14]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Setting up gcc version
run: |
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-${{ matrix.version }} 100
@ -24,11 +24,11 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
version: [14, 15, 16, 17, 18]
version: [19, 20]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Install dependencies
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends clang-14 clang-15
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends clang-19 clang-20
- name: Setting up clang version
run: |
sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-${{ matrix.version }} 100
@ -40,7 +40,7 @@ jobs:
container:
image: alpine:latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Install dependencies
run: apk update && apk add build-base python3
- name: Build
@ -48,7 +48,7 @@ jobs:
build-ubuntu-gcc-aarch64:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- name: Install dependencies
run: sudo apt-get update && sudo apt-get install -y --no-install-recommends gcc-aarch64-linux-gnu g++-aarch64-linux-gnu libgcc-s1-arm64-cross cpp-aarch64-linux-gnu
- name: Build

.gitignore

@ -1,2 +1,2 @@
out/
out-light/
/out/
/out-light/


@ -1,4 +1,4 @@
Copyright © 2018-2024 GrapheneOS
Copyright © 2018-2026 GrapheneOS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@ -1,4 +1,4 @@
# Hardened malloc
# hardened_malloc
* [Introduction](#introduction)
* [Dependencies](#dependencies)
@ -65,14 +65,14 @@ used instead as this allocator fundamentally doesn't support that environment.
## Dependencies
Debian stable (currently Debian 12) determines the most ancient set of
Debian stable (currently Debian 13) determines the most ancient set of
supported dependencies:
* glibc 2.36
* Linux 6.1
* Clang 14.0.6 or GCC 12.2.0
* glibc 2.41
* Linux 6.12
* Clang 19.1.7 or GCC 14.2.0
For Android, the Linux GKI 5.10, 5.15 and 6.1 branches are supported.
For Android, the Linux GKI 6.1, 6.6 and 6.12 branches are supported.
However, using more recent releases is highly recommended. Older versions of
the dependencies may be compatible at the moment but are not tested and will
@ -83,7 +83,7 @@ there will be custom integration offering better performance in the future
along with other hardening for the C standard library implementation.
For Android, only the current generation, actively developed maintenance branch of the Android
Open Source Project will be supported, which currently means `android15-release`.
Open Source Project will be supported, which currently means `android16-qpr2-release`.
## Testing
@ -159,14 +159,17 @@ line to the `/etc/ld.so.preload` configuration file:
The format of this configuration file is a whitespace-separated list, so it's
good practice to put each library on a separate line.
On Debian systems `libhardened_malloc.so` should be installed into `/usr/lib/`
to avoid preload failures caused by AppArmor profile restrictions.
For maximum compatibility `libhardened_malloc.so` can be installed into
`/usr/lib/` to avoid preload failures caused by AppArmor profiles or systemd
ExecPaths= restrictions. Check for logs of the following format:
ERROR: ld.so: object '/usr/local/lib/libhardened_malloc.so' from /etc/ld.so.preload cannot be preloaded (failed to map segment from shared object): ignored.
Using the `LD_PRELOAD` environment variable to load it on a case-by-case basis
will not work when `AT_SECURE` is set such as with setuid binaries. It's also
generally not a recommended approach for production usage. The recommendation
is to enable it globally and make exceptions for performance critical cases by
running the application in a container / namespace without it enabled.
running the application in a container/namespace without it enabled.
Make sure to raise `vm.max_map_count` substantially too to accommodate the very
large number of guard pages created by hardened\_malloc. As an example, in
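The preload setup described in the changed README text can be sketched as a short sequence of commands. This is a hedged illustration only: the install path comes from the README, but the sysctl value and file names below are example assumptions, not official recommendations.

```shell
# Install the library where AppArmor profiles and systemd ExecPaths=
# restrictions allow it to be preloaded (path per the README).
sudo cp libhardened_malloc.so /usr/lib/

# Enable it globally: /etc/ld.so.preload is a whitespace-separated list,
# and one library per line is good practice.
echo '/usr/lib/libhardened_malloc.so' | sudo tee -a /etc/ld.so.preload

# Raise vm.max_map_count to accommodate the many guard pages
# (1048576 is an illustrative value, not a prescribed one).
echo 'vm.max_map_count = 1048576' | sudo tee /etc/sysctl.d/hardened_malloc.conf
sudo sysctl --system
```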
@ -252,7 +255,7 @@ The following boolean configuration options are available:
* `CONFIG_WRITE_AFTER_FREE_CHECK`: `true` (default) or `false` to control
sanity checking that new small allocations contain zeroed memory. This can
detect writes caused by a write-after-free vulnerability and mixes well with
the features for making memory reuse randomized / delayed. This has a
the features for making memory reuse randomized/delayed. This has a
performance cost scaling to the size of the allocation, which is usually
acceptable. This is not relevant to large allocations because they're always
a fresh memory mapping from the kernel.
@ -338,7 +341,7 @@ larger caches can substantially improves performance).
## Core design
The core design of the allocator is very simple / minimalist. The allocator is
The core design of the allocator is very simple/minimalist. The allocator is
exclusive to 64-bit platforms in order to take full advantage of the abundant
address space without being constrained by needing to keep the design
compatible with 32-bit.
@ -370,13 +373,13 @@ whether it's free, along with a separate bitmap for tracking allocations in the
quarantine. The slab metadata entries in the array have intrusive lists
threaded through them to track partial slabs (partially filled, and these are
the first choice for allocation), empty slabs (limited amount of cached free
memory) and free slabs (purged / memory protected).
memory) and free slabs (purged/memory protected).
Large allocations are tracked via a global hash table mapping their address to
their size and random guard size. They're simply memory mappings and get mapped
on allocation and then unmapped on free. Large allocations are the only dynamic
memory mappings made by the allocator, since the address space for allocator
state (including both small / large allocation metadata) and slab allocations
state (including both small/large allocation metadata) and slab allocations
is statically reserved.
This allocator is aimed at production usage, not aiding with finding and fixing
@ -387,7 +390,7 @@ messages. The design choices are based around minimizing overhead and
maximizing security which often leads to different decisions than a tool
attempting to find bugs. For example, it uses zero-based sanitization on free
and doesn't minimize slack space from size class rounding between the end of an
allocation and the canary / guard region. Zero-based filling has the least
allocation and the canary/guard region. Zero-based filling has the least
chance of uncovering latent bugs, but also the best chance of mitigating
vulnerabilities. The canary feature is primarily meant to act as padding
absorbing small overflows to render them harmless, so slack space is helpful
@ -411,7 +414,7 @@ was a bit less important and if a core goal was finding latent bugs.
randomly sized guard regions around it
* Protection via Memory Protection Keys (MPK) on x86\_64 (disabled by
default due to low benefit-cost ratio on top of baseline protections)
* [future] Protection via MTE on ARMv8.5+
* Protection via MTE on ARMv8.5+
* Deterministic detection of any invalid free (unallocated, unaligned, etc.)
* Validation of the size passed for C++14 sized deallocation by `delete`
even for code compiled with earlier standards (detects type confusion if
@ -421,11 +424,11 @@ was a bit less important and if a core goal was finding latent bugs.
* Top-level isolated regions for each arena
* Divided up into isolated inner regions for each size class
* High entropy random base for each size class region
* No deterministic / low entropy offsets between allocations with
* No deterministic/low entropy offsets between allocations with
different size classes
* Metadata is completely outside the slab allocation region
* No references to metadata within the slab allocation region
* No deterministic / low entropy offsets to metadata
* No deterministic/low entropy offsets to metadata
* Entire slab region starts out non-readable and non-writable
* Slabs beyond the cache limit are purged and become non-readable and
non-writable memory again
@ -646,7 +649,7 @@ other. Static assignment can also reduce memory usage since threads may have
varying usage of size classes.
When there's substantial allocation or deallocation pressure, the allocator
does end up calling into the kernel to purge / protect unused slabs by
does end up calling into the kernel to purge/protect unused slabs by
replacing them with fresh `PROT_NONE` regions along with unprotecting slabs
when partially filled and cached empty slabs are depleted. There will be
configuration over the amount of cached empty slabs, but it's not entirely a
@ -693,7 +696,7 @@ The secondary benefit of thread caches is being able to avoid the underlying
allocator implementation entirely for some allocations and deallocations when
they're mixed together rather than many allocations being done together or many
frees being done together. The value of this depends a lot on the application
and it's entirely unsuitable / incompatible with a hardened allocator since it
and it's entirely unsuitable/incompatible with a hardened allocator since it
bypasses all of the underlying security and would destroy much of the security
value.
@ -957,7 +960,7 @@ doesn't handle large allocations within the arenas, so it presents those in the
For example, with 4 arenas enabled, there will be a 5th arena in the statistics
for the large allocations.
The `nmalloc` / `ndalloc` fields are 64-bit integers tracking allocation and
The `nmalloc`/`ndalloc` fields are 64-bit integers tracking allocation and
deallocation count. These are defined as wrapping on overflow, per the jemalloc
implementation.


@ -44,7 +44,7 @@ void *set_pointer_tag(void *ptr, u8 tag) {
return (void *) (((uintptr_t) tag << 56) | (uintptr_t) untag_pointer(ptr));
}
// This test checks that slab slot allocation uses tag that is distint from tags of its neighbors
// This test checks that slab slot allocation uses tag that is distinct from tags of its neighbors
// and from the tag of the previous allocation that used the same slot
void tag_distinctness() {
// tag 0 is reserved


@ -1530,7 +1530,8 @@ EXPORT void *h_realloc(void *old, size_t size) {
old = untag_pointer(old);
size_t old_size;
if (old < get_slab_region_end() && old >= ro.slab_region_start) {
bool old_in_slab_region = old < get_slab_region_end() && old >= ro.slab_region_start;
if (old_in_slab_region) {
old_size = slab_usable_size(old);
if (size <= max_slab_size_class && get_size_info(size).size == old_size) {
return old_orig;
@ -1647,7 +1648,7 @@ EXPORT void *h_realloc(void *old, size_t size) {
copy_size -= canary_size;
}
memcpy(new, old_orig, copy_size);
if (old_size <= max_slab_size_class) {
if (old_in_slab_region) {
deallocate_small(old, NULL);
} else {
deallocate_large(old, NULL);
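The intent of the realloc fix above can be sketched with a toy model (the names, sizes, and region here are illustrative stand-ins, not hardened_malloc's real API): whether an allocation is slab-backed must be decided by its address region, computed once and reused, because an allocation with a small size but above-PAGE_SIZE alignment is served as a large allocation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Stand-in for the statically reserved slab region (illustrative only).
static char slab_region[65536];

static bool in_slab_region(const void *p) {
    uintptr_t a = (uintptr_t)p;
    return a >= (uintptr_t)slab_region &&
           a < (uintptr_t)(slab_region + sizeof(slab_region));
}

// The buggy assumption: "small usable size implies slab allocation".
static const char *dealloc_path_buggy(const void *p, size_t usable_size) {
    (void)p;
    return usable_size <= 16384 ? "slab" : "large";
}

// The fixed check: decide by region membership, not by size.
static const char *dealloc_path_fixed(const void *p, size_t usable_size) {
    (void)usable_size;  // size alone is not a reliable discriminator
    return in_slab_region(p) ? "slab" : "large";
}
```

A small-sized but highly aligned large allocation lives outside the slab region, so the size-based check would wrongly take the slab path while the region-based check takes the large path.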


@ -98,7 +98,7 @@ class TestSimpleMemoryCorruption(unittest.TestCase):
self.assertEqual(stderr.decode("utf-8"),
"fatal allocator error: invalid free\n")
def test_invalid_malloc_usable_size_small_quarantene(self):
def test_invalid_malloc_usable_size_small_quarantine(self):
_stdout, stderr, returncode = self.run_test(
"invalid_malloc_usable_size_small_quarantine")
self.assertEqual(returncode, -6)


@ -2,7 +2,7 @@
// https://libdivide.com
//
// Copyright (C) 2010 - 2022 ridiculous_fish, <libdivide@ridiculousfish.com>
// Copyright (C) 2016 - 2022 Kim Walisch, <kim.walisch@gmail.com>
// Copyright (C) 2016 - 2026 Kim Walisch, <kim.walisch@gmail.com>
//
// libdivide is dual-licensed under the Boost or zlib licenses.
// You may use libdivide under the terms of either of these.
@ -12,18 +12,26 @@
#define LIBDIVIDE_H
// *** Version numbers are auto generated - do not edit ***
#define LIBDIVIDE_VERSION "5.2.0"
#define LIBDIVIDE_VERSION "5.3.0"
#define LIBDIVIDE_VERSION_MAJOR 5
#define LIBDIVIDE_VERSION_MINOR 2
#define LIBDIVIDE_VERSION_MINOR 3
#define LIBDIVIDE_VERSION_PATCH 0
#include <stdint.h>
#if !defined(__AVR__)
#if !defined(__AVR__) && __STDC_HOSTED__ != 0
#include <stdio.h>
#include <stdlib.h>
#endif
#if defined(_MSC_VER) && (defined(__cplusplus) && (__cplusplus >= 202002L)) || \
(defined(_MSVC_LANG) && (_MSVC_LANG >= 202002L))
#if __has_include(<bit>)
#include <bit>
#define LIBDIVIDE_VC_CXX20
#endif
#endif
#if defined(LIBDIVIDE_SSE2)
#include <emmintrin.h>
#endif
@ -37,23 +45,23 @@
#endif
// Clang-cl prior to Visual Studio 2022 doesn't include __umulh/__mulh intrinsics
#if defined(_MSC_VER) && defined(LIBDIVIDE_X86_64) && (!defined(__clang__) || _MSC_VER>1930)
#define LIBDIVIDE_X64_INTRINSICS
#if defined(_MSC_VER) && (!defined(__clang__) || _MSC_VER > 1930) && \
(defined(_M_X64) || defined(_M_ARM64) || defined(_M_HYBRID_X86_ARM64) || defined(_M_ARM64EC))
#define LIBDIVIDE_MULH_INTRINSICS
#endif
#if defined(_MSC_VER)
#if defined(LIBDIVIDE_X64_INTRINSICS)
#if defined(LIBDIVIDE_MULH_INTRINSICS) || !defined(__clang__)
#include <intrin.h>
#endif
#ifndef __clang__
#pragma warning(push)
// disable warning C4146: unary minus operator applied
// to unsigned type, result still unsigned
// 4146: unary minus operator applied to unsigned type, result still unsigned
#pragma warning(disable : 4146)
// disable warning C4204: nonstandard extension used : non-constant aggregate
// initializer
//
// It's valid C99
// 4204: nonstandard extension used : non-constant aggregate initializer
#pragma warning(disable : 4204)
#endif
#define LIBDIVIDE_VC
#endif
@ -95,10 +103,14 @@
#endif
#endif
#ifndef LIBDIVIDE_INLINE
#ifdef _MSC_VER
#define LIBDIVIDE_INLINE __forceinline
#else
#define LIBDIVIDE_INLINE inline
#endif
#endif
#if defined(__AVR__)
#if defined(__AVR__) || __STDC_HOSTED__ == 0
#define LIBDIVIDE_ERROR(msg)
#else
#define LIBDIVIDE_ERROR(msg) \
@ -108,7 +120,7 @@
} while (0)
#endif
#if defined(LIBDIVIDE_ASSERTIONS_ON) && !defined(__AVR__)
#if defined(LIBDIVIDE_ASSERTIONS_ON) && !defined(__AVR__) && __STDC_HOSTED__ != 0
#define LIBDIVIDE_ASSERT(x) \
do { \
if (!(x)) { \
@ -122,9 +134,67 @@
#endif
#ifdef __cplusplus
// Our __builtin_clz() implementation for the MSVC compiler
// requires C++20 or later for constexpr support.
#if defined(LIBDIVIDE_VC_CXX20)
#define LIBDIVIDE_CONSTEXPR_INLINE constexpr LIBDIVIDE_INLINE
// Use https://en.cppreference.com/w/cpp/feature_test#cpp_constexpr
// For constexpr zero initialization, c++11 might handle things ok,
// but just limit to at least c++14 to ensure we don't break anyone's code:
#elif (!defined(_MSC_VER) || defined(__clang__)) && \
defined(__cpp_constexpr) && __cpp_constexpr >= 201304L
#define LIBDIVIDE_CONSTEXPR_INLINE constexpr LIBDIVIDE_INLINE
#else
#define LIBDIVIDE_CONSTEXPR_INLINE LIBDIVIDE_INLINE
#endif
namespace libdivide {
#endif
#if defined(_MSC_VER) && !defined(__clang__)
// Required for C programming language
#ifndef LIBDIVIDE_CONSTEXPR_INLINE
#define LIBDIVIDE_CONSTEXPR_INLINE LIBDIVIDE_INLINE
#endif
static LIBDIVIDE_CONSTEXPR_INLINE int __builtin_clz(unsigned x) {
#if defined(LIBDIVIDE_VC_CXX20)
return std::countl_zero(x);
#elif defined(_M_ARM) || defined(_M_ARM64) || defined(_M_HYBRID_X86_ARM64) || defined(_M_ARM64EC)
return (int)_CountLeadingZeros(x);
#elif defined(__AVX2__) || defined(__LZCNT__)
return (int)_lzcnt_u32(x);
#else
unsigned long r;
_BitScanReverse(&r, x);
return (int)(r ^ 31);
#endif
}
static LIBDIVIDE_CONSTEXPR_INLINE int __builtin_clzll(unsigned long long x) {
#if defined(LIBDIVIDE_VC_CXX20)
return std::countl_zero(x);
#elif defined(_M_ARM) || defined(_M_ARM64) || defined(_M_HYBRID_X86_ARM64) || defined(_M_ARM64EC)
return (int)_CountLeadingZeros64(x);
#elif defined(_WIN64)
#if defined(__AVX2__) || defined(__LZCNT__)
return (int)_lzcnt_u64(x);
#else
unsigned long r;
_BitScanReverse64(&r, x);
return (int)(r ^ 63);
#endif
#else
int l = __builtin_clz((unsigned)x) + 32;
int h = __builtin_clz((unsigned)(x >> 32));
return !!((unsigned)(x >> 32)) ? h : l;
#endif
}
#endif // MSVC __builtin_clz()
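The final fallback above composes a 64-bit count-leading-zeros from two 32-bit ones. A hedged GCC/Clang-flavored sketch of the same idea (function name is made up here; the zero cases are guarded explicitly because `__builtin_clz(0)` is undefined):

```c
#include <assert.h>

// 64-bit clz built from 32-bit clz, mirroring the select-high-or-low idea.
static int clz64_from_clz32(unsigned long long x) {
    unsigned hi = (unsigned)(x >> 32);
    unsigned lo = (unsigned)x;
    if (hi != 0) {
        return __builtin_clz(hi);       // high word supplies the answer
    }
    if (lo != 0) {
        return 32 + __builtin_clz(lo);  // skip the 32 zero bits of the high word
    }
    return 64;                          // all bits zero
}
```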
// pack divider structs to prevent compilers from padding.
// This reduces memory usage by up to 43% when using a large
// array of libdivide dividers and improves performance
@ -334,7 +404,7 @@ static LIBDIVIDE_INLINE int32_t libdivide_mullhi_s32(int32_t x, int32_t y) {
}
static LIBDIVIDE_INLINE uint64_t libdivide_mullhi_u64(uint64_t x, uint64_t y) {
#if defined(LIBDIVIDE_X64_INTRINSICS)
#if defined(LIBDIVIDE_MULH_INTRINSICS)
return __umulh(x, y);
#elif defined(HAS_INT128_T)
__uint128_t xl = x, yl = y;
@ -360,7 +430,7 @@ static LIBDIVIDE_INLINE uint64_t libdivide_mullhi_u64(uint64_t x, uint64_t y) {
}
static LIBDIVIDE_INLINE int64_t libdivide_mullhi_s64(int64_t x, int64_t y) {
#if defined(LIBDIVIDE_X64_INTRINSICS)
#if defined(LIBDIVIDE_MULH_INTRINSICS)
return __mulh(x, y);
#elif defined(HAS_INT128_T)
__int128_t xl = x, yl = y;
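When neither `__umulh`/`__mulh` nor a 128-bit integer type is available, the high 64 bits of a 64×64 multiply can be assembled from 32-bit halves. A hedged sketch of that decomposition (the function name is illustrative, not libdivide's exact code):

```c
#include <assert.h>
#include <stdint.h>

// High 64 bits of x * y using only 64-bit arithmetic on 32-bit halves.
static uint64_t mullhi_u64_portable(uint64_t x, uint64_t y) {
    uint64_t x0 = (uint32_t)x, x1 = x >> 32;
    uint64_t y0 = (uint32_t)y, y1 = y >> 32;
    uint64_t lo = x0 * y0;
    // Carry the high half of the low product into the cross terms.
    uint64_t mid1 = x1 * y0 + (lo >> 32);
    uint64_t mid2 = x0 * y1 + (uint32_t)mid1;
    return x1 * y1 + (mid1 >> 32) + (mid2 >> 32);
}
```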
@ -386,15 +456,9 @@ static LIBDIVIDE_INLINE int16_t libdivide_count_leading_zeros16(uint16_t val) {
// Fast way to count leading zeros
// On the AVR 8-bit architecture __builtin_clz() works on a int16_t.
return __builtin_clz(val);
#elif defined(__GNUC__) || __has_builtin(__builtin_clz)
#elif defined(__GNUC__) || __has_builtin(__builtin_clz) || defined(_MSC_VER)
// Fast way to count leading zeros
return __builtin_clz(val) - 16;
#elif defined(LIBDIVIDE_VC)
unsigned long result;
if (_BitScanReverse(&result, (unsigned long)val)) {
return (int16_t)(15 - result);
}
return 0;
return (int16_t)(__builtin_clz(val) - 16);
#else
if (val == 0) return 16;
int16_t result = 4;
@ -415,15 +479,9 @@ static LIBDIVIDE_INLINE int32_t libdivide_count_leading_zeros32(uint32_t val) {
#if defined(__AVR__)
// Fast way to count leading zeros
return __builtin_clzl(val);
#elif defined(__GNUC__) || __has_builtin(__builtin_clz)
#elif defined(__GNUC__) || __has_builtin(__builtin_clz) || defined(_MSC_VER)
// Fast way to count leading zeros
return __builtin_clz(val);
#elif defined(LIBDIVIDE_VC)
unsigned long result;
if (_BitScanReverse(&result, val)) {
return 31 - result;
}
return 0;
#else
if (val == 0) return 32;
int32_t result = 8;
@ -441,15 +499,9 @@ static LIBDIVIDE_INLINE int32_t libdivide_count_leading_zeros32(uint32_t val) {
}
static LIBDIVIDE_INLINE int32_t libdivide_count_leading_zeros64(uint64_t val) {
#if defined(__GNUC__) || __has_builtin(__builtin_clzll)
#if defined(__GNUC__) || __has_builtin(__builtin_clzll) || defined(_MSC_VER)
// Fast way to count leading zeros
return __builtin_clzll(val);
#elif defined(LIBDIVIDE_VC) && defined(_WIN64)
unsigned long result;
if (_BitScanReverse64(&result, val)) {
return 63 - result;
}
return 0;
#else
uint32_t hi = val >> 32;
uint32_t lo = val & 0xFFFFFFFF;
@ -496,7 +548,7 @@ static LIBDIVIDE_INLINE uint64_t libdivide_128_div_64_to_64(
// it's not LIBDIVIDE_INLINEd.
#if defined(LIBDIVIDE_X86_64) && defined(LIBDIVIDE_GCC_STYLE_ASM)
uint64_t result;
__asm__("divq %[v]" : "=a"(result), "=d"(*r) : [v] "r"(den), "a"(numlo), "d"(numhi));
__asm__("div %[v]" : "=a"(result), "=d"(*r) : [v] "r"(den), "a"(numlo), "d"(numhi));
return result;
#else
// We work in base 2**32.
@ -546,7 +598,7 @@ static LIBDIVIDE_INLINE uint64_t libdivide_128_div_64_to_64(
shift = libdivide_count_leading_zeros64(den);
den <<= shift;
numhi <<= shift;
numhi |= (numlo >> (-shift & 63)) & (-(int64_t)shift >> 63);
numhi |= (numlo >> (-shift & 63)) & (uint64_t)(-(int64_t)shift >> 63);
numlo <<= shift;
// Extract the low digits of the numerator and both digits of the denominator.
@ -755,11 +807,11 @@ static LIBDIVIDE_INLINE struct libdivide_u16_t libdivide_internal_u16_gen(
return result;
}
struct libdivide_u16_t libdivide_u16_gen(uint16_t d) {
static LIBDIVIDE_INLINE struct libdivide_u16_t libdivide_u16_gen(uint16_t d) {
return libdivide_internal_u16_gen(d, 0);
}
struct libdivide_u16_branchfree_t libdivide_u16_branchfree_gen(uint16_t d) {
static LIBDIVIDE_INLINE struct libdivide_u16_branchfree_t libdivide_u16_branchfree_gen(uint16_t d) {
if (d == 1) {
LIBDIVIDE_ERROR("branchfree divider must be != 1");
}
@ -772,11 +824,11 @@ struct libdivide_u16_branchfree_t libdivide_u16_branchfree_gen(uint16_t d) {
// The original libdivide_u16_do takes a const pointer. However, this cannot be used
// with a compile time constant libdivide_u16_t: it will generate a warning about
// taking the address of a temporary. Hence this overload.
uint16_t libdivide_u16_do_raw(uint16_t numer, uint16_t magic, uint8_t more) {
static LIBDIVIDE_INLINE uint16_t libdivide_u16_do_raw(uint16_t numer, uint16_t magic, uint8_t more) {
if (!magic) {
return numer >> more;
} else {
uint16_t q = libdivide_mullhi_u16(magic, numer);
uint16_t q = libdivide_mullhi_u16(numer, magic);
if (more & LIBDIVIDE_ADD_MARKER) {
uint16_t t = ((numer - q) >> 1) + q;
return t >> (more & LIBDIVIDE_16_SHIFT_MASK);
@ -788,18 +840,18 @@ uint16_t libdivide_u16_do_raw(uint16_t numer, uint16_t magic, uint8_t more) {
}
}
uint16_t libdivide_u16_do(uint16_t numer, const struct libdivide_u16_t *denom) {
static LIBDIVIDE_INLINE uint16_t libdivide_u16_do(uint16_t numer, const struct libdivide_u16_t *denom) {
return libdivide_u16_do_raw(numer, denom->magic, denom->more);
}
uint16_t libdivide_u16_branchfree_do(
static LIBDIVIDE_INLINE uint16_t libdivide_u16_branchfree_do(
uint16_t numer, const struct libdivide_u16_branchfree_t *denom) {
uint16_t q = libdivide_mullhi_u16(denom->magic, numer);
uint16_t q = libdivide_mullhi_u16(numer, denom->magic);
uint16_t t = ((numer - q) >> 1) + q;
return t >> denom->more;
}
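The `((numer - q) >> 1) + q` expression above averages `numer` and `q` without the wraparound that `(numer + q) / 2` could suffer; it is valid because `q`, a multiply-high result, never exceeds `numer`. A small sketch of the trick (helper name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

// Rounded-down average of a and b without overflow, assuming a >= b.
// (a + b) / 2 could wrap in 16 bits; (a - b) never can when a >= b.
static uint16_t avg_round_down(uint16_t a, uint16_t b) {
    return (uint16_t)(((uint16_t)(a - b) >> 1) + b);
}
```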
uint16_t libdivide_u16_recover(const struct libdivide_u16_t *denom) {
static LIBDIVIDE_INLINE uint16_t libdivide_u16_recover(const struct libdivide_u16_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_16_SHIFT_MASK;
@ -837,7 +889,7 @@ uint16_t libdivide_u16_recover(const struct libdivide_u16_t *denom) {
}
}
uint16_t libdivide_u16_branchfree_recover(const struct libdivide_u16_branchfree_t *denom) {
static LIBDIVIDE_INLINE uint16_t libdivide_u16_branchfree_recover(const struct libdivide_u16_branchfree_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_16_SHIFT_MASK;
@ -919,11 +971,11 @@ static LIBDIVIDE_INLINE struct libdivide_u32_t libdivide_internal_u32_gen(
return result;
}
struct libdivide_u32_t libdivide_u32_gen(uint32_t d) {
static LIBDIVIDE_INLINE struct libdivide_u32_t libdivide_u32_gen(uint32_t d) {
return libdivide_internal_u32_gen(d, 0);
}
struct libdivide_u32_branchfree_t libdivide_u32_branchfree_gen(uint32_t d) {
static LIBDIVIDE_INLINE struct libdivide_u32_branchfree_t libdivide_u32_branchfree_gen(uint32_t d) {
if (d == 1) {
LIBDIVIDE_ERROR("branchfree divider must be != 1");
}
@ -933,11 +985,11 @@ struct libdivide_u32_branchfree_t libdivide_u32_branchfree_gen(uint32_t d) {
return ret;
}
uint32_t libdivide_u32_do_raw(uint32_t numer, uint32_t magic, uint8_t more) {
static LIBDIVIDE_INLINE uint32_t libdivide_u32_do_raw(uint32_t numer, uint32_t magic, uint8_t more) {
if (!magic) {
return numer >> more;
} else {
uint32_t q = libdivide_mullhi_u32(magic, numer);
uint32_t q = libdivide_mullhi_u32(numer, magic);
if (more & LIBDIVIDE_ADD_MARKER) {
uint32_t t = ((numer - q) >> 1) + q;
return t >> (more & LIBDIVIDE_32_SHIFT_MASK);
@ -949,18 +1001,18 @@ uint32_t libdivide_u32_do_raw(uint32_t numer, uint32_t magic, uint8_t more) {
}
}
uint32_t libdivide_u32_do(uint32_t numer, const struct libdivide_u32_t *denom) {
static LIBDIVIDE_INLINE uint32_t libdivide_u32_do(uint32_t numer, const struct libdivide_u32_t *denom) {
return libdivide_u32_do_raw(numer, denom->magic, denom->more);
}
uint32_t libdivide_u32_branchfree_do(
static LIBDIVIDE_INLINE uint32_t libdivide_u32_branchfree_do(
uint32_t numer, const struct libdivide_u32_branchfree_t *denom) {
uint32_t q = libdivide_mullhi_u32(denom->magic, numer);
uint32_t q = libdivide_mullhi_u32(numer, denom->magic);
uint32_t t = ((numer - q) >> 1) + q;
return t >> denom->more;
}
uint32_t libdivide_u32_recover(const struct libdivide_u32_t *denom) {
static LIBDIVIDE_INLINE uint32_t libdivide_u32_recover(const struct libdivide_u32_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK;
@ -998,7 +1050,7 @@ uint32_t libdivide_u32_recover(const struct libdivide_u32_t *denom) {
}
}
uint32_t libdivide_u32_branchfree_recover(const struct libdivide_u32_branchfree_t *denom) {
static LIBDIVIDE_INLINE uint32_t libdivide_u32_branchfree_recover(const struct libdivide_u32_branchfree_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK;
@ -1027,7 +1079,7 @@ uint32_t libdivide_u32_branchfree_recover(const struct libdivide_u32_branchfree_
}
}
/////////// UINT64
////////// UINT64
static LIBDIVIDE_INLINE struct libdivide_u64_t libdivide_internal_u64_gen(
uint64_t d, int branchfree) {
@ -1082,11 +1134,11 @@ static LIBDIVIDE_INLINE struct libdivide_u64_t libdivide_internal_u64_gen(
return result;
}
struct libdivide_u64_t libdivide_u64_gen(uint64_t d) {
static LIBDIVIDE_INLINE struct libdivide_u64_t libdivide_u64_gen(uint64_t d) {
return libdivide_internal_u64_gen(d, 0);
}
struct libdivide_u64_branchfree_t libdivide_u64_branchfree_gen(uint64_t d) {
static LIBDIVIDE_INLINE struct libdivide_u64_branchfree_t libdivide_u64_branchfree_gen(uint64_t d) {
if (d == 1) {
LIBDIVIDE_ERROR("branchfree divider must be != 1");
}
@ -1096,11 +1148,11 @@ struct libdivide_u64_branchfree_t libdivide_u64_branchfree_gen(uint64_t d) {
return ret;
}
uint64_t libdivide_u64_do_raw(uint64_t numer, uint64_t magic, uint8_t more) {
static LIBDIVIDE_INLINE uint64_t libdivide_u64_do_raw(uint64_t numer, uint64_t magic, uint8_t more) {
if (!magic) {
return numer >> more;
} else {
uint64_t q = libdivide_mullhi_u64(magic, numer);
uint64_t q = libdivide_mullhi_u64(numer, magic);
if (more & LIBDIVIDE_ADD_MARKER) {
uint64_t t = ((numer - q) >> 1) + q;
return t >> (more & LIBDIVIDE_64_SHIFT_MASK);
@ -1112,18 +1164,18 @@ uint64_t libdivide_u64_do_raw(uint64_t numer, uint64_t magic, uint8_t more) {
}
}
uint64_t libdivide_u64_do(uint64_t numer, const struct libdivide_u64_t *denom) {
static LIBDIVIDE_INLINE uint64_t libdivide_u64_do(uint64_t numer, const struct libdivide_u64_t *denom) {
return libdivide_u64_do_raw(numer, denom->magic, denom->more);
}
uint64_t libdivide_u64_branchfree_do(
static LIBDIVIDE_INLINE uint64_t libdivide_u64_branchfree_do(
uint64_t numer, const struct libdivide_u64_branchfree_t *denom) {
uint64_t q = libdivide_mullhi_u64(denom->magic, numer);
uint64_t q = libdivide_mullhi_u64(numer, denom->magic);
uint64_t t = ((numer - q) >> 1) + q;
return t >> denom->more;
}
uint64_t libdivide_u64_recover(const struct libdivide_u64_t *denom) {
static LIBDIVIDE_INLINE uint64_t libdivide_u64_recover(const struct libdivide_u64_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK;
@ -1167,7 +1219,7 @@ uint64_t libdivide_u64_recover(const struct libdivide_u64_t *denom) {
}
}
uint64_t libdivide_u64_branchfree_recover(const struct libdivide_u64_branchfree_t *denom) {
static LIBDIVIDE_INLINE uint64_t libdivide_u64_branchfree_recover(const struct libdivide_u64_branchfree_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK;
@ -1202,7 +1254,7 @@ uint64_t libdivide_u64_branchfree_recover(const struct libdivide_u64_branchfree_
}
}
/////////// SINT16
////////// SINT16
static LIBDIVIDE_INLINE struct libdivide_s16_t libdivide_internal_s16_gen(
int16_t d, int branchfree) {
@ -1270,11 +1322,11 @@ static LIBDIVIDE_INLINE struct libdivide_s16_t libdivide_internal_s16_gen(
return result;
}
struct libdivide_s16_t libdivide_s16_gen(int16_t d) {
static LIBDIVIDE_INLINE struct libdivide_s16_t libdivide_s16_gen(int16_t d) {
return libdivide_internal_s16_gen(d, 0);
}
struct libdivide_s16_branchfree_t libdivide_s16_branchfree_gen(int16_t d) {
static LIBDIVIDE_INLINE struct libdivide_s16_branchfree_t libdivide_s16_branchfree_gen(int16_t d) {
struct libdivide_s16_t tmp = libdivide_internal_s16_gen(d, 1);
struct libdivide_s16_branchfree_t result = {tmp.magic, tmp.more};
return result;
@ -1283,7 +1335,7 @@ struct libdivide_s16_branchfree_t libdivide_s16_branchfree_gen(int16_t d) {
// The original libdivide_s16_do takes a const pointer. However, this cannot be used
// with a compile time constant libdivide_s16_t: it will generate a warning about
// taking the address of a temporary. Hence this overload.
int16_t libdivide_s16_do_raw(int16_t numer, int16_t magic, uint8_t more) {
static LIBDIVIDE_INLINE int16_t libdivide_s16_do_raw(int16_t numer, int16_t magic, uint8_t more) {
uint8_t shift = more & LIBDIVIDE_16_SHIFT_MASK;
if (!magic) {
@@ -1295,7 +1347,7 @@ int16_t libdivide_s16_do_raw(int16_t numer, int16_t magic, uint8_t more) {
q = (q ^ sign) - sign;
return q;
} else {
-uint16_t uq = (uint16_t)libdivide_mullhi_s16(magic, numer);
+uint16_t uq = (uint16_t)libdivide_mullhi_s16(numer, magic);
if (more & LIBDIVIDE_ADD_MARKER) {
// must be arithmetic shift and then sign extend
int16_t sign = (int8_t)more >> 7;
@@ -1310,17 +1362,17 @@ int16_t libdivide_s16_do_raw(int16_t numer, int16_t magic, uint8_t more) {
}
}
-int16_t libdivide_s16_do(int16_t numer, const struct libdivide_s16_t *denom) {
+static LIBDIVIDE_INLINE int16_t libdivide_s16_do(int16_t numer, const struct libdivide_s16_t *denom) {
return libdivide_s16_do_raw(numer, denom->magic, denom->more);
}
-int16_t libdivide_s16_branchfree_do(int16_t numer, const struct libdivide_s16_branchfree_t *denom) {
+static LIBDIVIDE_INLINE int16_t libdivide_s16_branchfree_do(int16_t numer, const struct libdivide_s16_branchfree_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_16_SHIFT_MASK;
// must be arithmetic shift and then sign extend
int16_t sign = (int8_t)more >> 7;
int16_t magic = denom->magic;
-int16_t q = libdivide_mullhi_s16(magic, numer);
+int16_t q = libdivide_mullhi_s16(numer, magic);
q += numer;
// If q is non-negative, we have nothing to do
@@ -1338,7 +1390,7 @@ int16_t libdivide_s16_branchfree_do(int16_t numer, const struct libdivide_s16_br
return q;
}
-int16_t libdivide_s16_recover(const struct libdivide_s16_t *denom) {
+static LIBDIVIDE_INLINE int16_t libdivide_s16_recover(const struct libdivide_s16_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_16_SHIFT_MASK;
if (!denom->magic) {
@@ -1373,11 +1425,12 @@ int16_t libdivide_s16_recover(const struct libdivide_s16_t *denom) {
}
}
-int16_t libdivide_s16_branchfree_recover(const struct libdivide_s16_branchfree_t *denom) {
-return libdivide_s16_recover((const struct libdivide_s16_t *)denom);
+static LIBDIVIDE_INLINE int16_t libdivide_s16_branchfree_recover(const struct libdivide_s16_branchfree_t *denom) {
+const struct libdivide_s16_t den = {denom->magic, denom->more};
+return libdivide_s16_recover(&den);
}
-/////////// SINT32
+////////// SINT32
static LIBDIVIDE_INLINE struct libdivide_s32_t libdivide_internal_s32_gen(
int32_t d, int branchfree) {
@@ -1445,17 +1498,17 @@ static LIBDIVIDE_INLINE struct libdivide_s32_t libdivide_internal_s32_gen(
return result;
}
-struct libdivide_s32_t libdivide_s32_gen(int32_t d) {
+static LIBDIVIDE_INLINE struct libdivide_s32_t libdivide_s32_gen(int32_t d) {
return libdivide_internal_s32_gen(d, 0);
}
-struct libdivide_s32_branchfree_t libdivide_s32_branchfree_gen(int32_t d) {
+static LIBDIVIDE_INLINE struct libdivide_s32_branchfree_t libdivide_s32_branchfree_gen(int32_t d) {
struct libdivide_s32_t tmp = libdivide_internal_s32_gen(d, 1);
struct libdivide_s32_branchfree_t result = {tmp.magic, tmp.more};
return result;
}
-int32_t libdivide_s32_do_raw(int32_t numer, int32_t magic, uint8_t more) {
+static LIBDIVIDE_INLINE int32_t libdivide_s32_do_raw(int32_t numer, int32_t magic, uint8_t more) {
uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK;
if (!magic) {
@@ -1467,7 +1520,7 @@ int32_t libdivide_s32_do_raw(int32_t numer, int32_t magic, uint8_t more) {
q = (q ^ sign) - sign;
return q;
} else {
-uint32_t uq = (uint32_t)libdivide_mullhi_s32(magic, numer);
+uint32_t uq = (uint32_t)libdivide_mullhi_s32(numer, magic);
if (more & LIBDIVIDE_ADD_MARKER) {
// must be arithmetic shift and then sign extend
int32_t sign = (int8_t)more >> 7;
@@ -1482,17 +1535,17 @@ int32_t libdivide_s32_do_raw(int32_t numer, int32_t magic, uint8_t more) {
}
}
-int32_t libdivide_s32_do(int32_t numer, const struct libdivide_s32_t *denom) {
+static LIBDIVIDE_INLINE int32_t libdivide_s32_do(int32_t numer, const struct libdivide_s32_t *denom) {
return libdivide_s32_do_raw(numer, denom->magic, denom->more);
}
-int32_t libdivide_s32_branchfree_do(int32_t numer, const struct libdivide_s32_branchfree_t *denom) {
+static LIBDIVIDE_INLINE int32_t libdivide_s32_branchfree_do(int32_t numer, const struct libdivide_s32_branchfree_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK;
// must be arithmetic shift and then sign extend
int32_t sign = (int8_t)more >> 7;
int32_t magic = denom->magic;
-int32_t q = libdivide_mullhi_s32(magic, numer);
+int32_t q = libdivide_mullhi_s32(numer, magic);
q += numer;
// If q is non-negative, we have nothing to do
@@ -1510,7 +1563,7 @@ int32_t libdivide_s32_branchfree_do(int32_t numer, const struct libdivide_s32_br
return q;
}
-int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom) {
+static LIBDIVIDE_INLINE int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_32_SHIFT_MASK;
if (!denom->magic) {
@@ -1545,11 +1598,12 @@ int32_t libdivide_s32_recover(const struct libdivide_s32_t *denom) {
}
}
-int32_t libdivide_s32_branchfree_recover(const struct libdivide_s32_branchfree_t *denom) {
-return libdivide_s32_recover((const struct libdivide_s32_t *)denom);
+static LIBDIVIDE_INLINE int32_t libdivide_s32_branchfree_recover(const struct libdivide_s32_branchfree_t *denom) {
+const struct libdivide_s32_t den = {denom->magic, denom->more};
+return libdivide_s32_recover(&den);
}
-///////////// SINT64
+////////// SINT64
static LIBDIVIDE_INLINE struct libdivide_s64_t libdivide_internal_s64_gen(
int64_t d, int branchfree) {
@@ -1617,17 +1671,17 @@ static LIBDIVIDE_INLINE struct libdivide_s64_t libdivide_internal_s64_gen(
return result;
}
-struct libdivide_s64_t libdivide_s64_gen(int64_t d) {
+static LIBDIVIDE_INLINE struct libdivide_s64_t libdivide_s64_gen(int64_t d) {
return libdivide_internal_s64_gen(d, 0);
}
-struct libdivide_s64_branchfree_t libdivide_s64_branchfree_gen(int64_t d) {
+static LIBDIVIDE_INLINE struct libdivide_s64_branchfree_t libdivide_s64_branchfree_gen(int64_t d) {
struct libdivide_s64_t tmp = libdivide_internal_s64_gen(d, 1);
struct libdivide_s64_branchfree_t ret = {tmp.magic, tmp.more};
return ret;
}
-int64_t libdivide_s64_do_raw(int64_t numer, int64_t magic, uint8_t more) {
+static LIBDIVIDE_INLINE int64_t libdivide_s64_do_raw(int64_t numer, int64_t magic, uint8_t more) {
uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK;
if (!magic) { // shift path
@@ -1640,7 +1694,7 @@ int64_t libdivide_s64_do_raw(int64_t numer, int64_t magic, uint8_t more) {
q = (q ^ sign) - sign;
return q;
} else {
-uint64_t uq = (uint64_t)libdivide_mullhi_s64(magic, numer);
+uint64_t uq = (uint64_t)libdivide_mullhi_s64(numer, magic);
if (more & LIBDIVIDE_ADD_MARKER) {
// must be arithmetic shift and then sign extend
int64_t sign = (int8_t)more >> 7;
@@ -1655,17 +1709,17 @@ int64_t libdivide_s64_do_raw(int64_t numer, int64_t magic, uint8_t more) {
}
}
-int64_t libdivide_s64_do(int64_t numer, const struct libdivide_s64_t *denom) {
+static LIBDIVIDE_INLINE int64_t libdivide_s64_do(int64_t numer, const struct libdivide_s64_t *denom) {
return libdivide_s64_do_raw(numer, denom->magic, denom->more);
}
-int64_t libdivide_s64_branchfree_do(int64_t numer, const struct libdivide_s64_branchfree_t *denom) {
+static LIBDIVIDE_INLINE int64_t libdivide_s64_branchfree_do(int64_t numer, const struct libdivide_s64_branchfree_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK;
// must be arithmetic shift and then sign extend
int64_t sign = (int8_t)more >> 7;
int64_t magic = denom->magic;
-int64_t q = libdivide_mullhi_s64(magic, numer);
+int64_t q = libdivide_mullhi_s64(numer, magic);
q += numer;
// If q is non-negative, we have nothing to do.
@@ -1683,7 +1737,7 @@ int64_t libdivide_s64_branchfree_do(int64_t numer, const struct libdivide_s64_br
return q;
}
-int64_t libdivide_s64_recover(const struct libdivide_s64_t *denom) {
+static LIBDIVIDE_INLINE int64_t libdivide_s64_recover(const struct libdivide_s64_t *denom) {
uint8_t more = denom->more;
uint8_t shift = more & LIBDIVIDE_64_SHIFT_MASK;
if (denom->magic == 0) { // shift path
@@ -1709,8 +1763,9 @@ int64_t libdivide_s64_recover(const struct libdivide_s64_t *denom) {
}
}
-int64_t libdivide_s64_branchfree_recover(const struct libdivide_s64_branchfree_t *denom) {
-return libdivide_s64_recover((const struct libdivide_s64_t *)denom);
+static LIBDIVIDE_INLINE int64_t libdivide_s64_branchfree_recover(const struct libdivide_s64_branchfree_t *denom) {
+const struct libdivide_s64_t den = {denom->magic, denom->more};
+return libdivide_s64_recover(&den);
}
// Simplest possible vector type division: treat the vector type as an array
@@ -2751,7 +2806,7 @@ static LIBDIVIDE_INLINE __m128i libdivide_mullhi_s64_vec128(__m128i x, __m128i y
return p;
}
-////////// UINT26
+////////// UINT16
__m128i libdivide_u16_do_vec128(__m128i numers, const struct libdivide_u16_t *denom) {
uint8_t more = denom->more;
@@ -2993,32 +3048,10 @@ __m128i libdivide_s64_branchfree_do_vec128(
#endif
-/////////// C++ stuff
+////////// C++ stuff
#ifdef __cplusplus
-//for constexpr zero initialization,
-//c++11 might handle things ok,
-//but just limit to at least c++14 to ensure
-//we don't break anyone's code:
-// for gcc and clang, use https://en.cppreference.com/w/cpp/feature_test#cpp_constexpr
-#if (defined(__GNUC__) || defined(__clang__)) && (__cpp_constexpr >= 201304L)
-#define LIBDIVIDE_CONSTEXPR constexpr
-// supposedly, MSVC might not implement feature test macros right (https://stackoverflow.com/questions/49316752/feature-test-macros-not-working-properly-in-visual-c)
-// so check that _MSVC_LANG corresponds to at least c++14, and _MSC_VER corresponds to at least VS 2017 15.0 (for extended constexpr support https://learn.microsoft.com/en-us/cpp/overview/visual-cpp-language-conformance?view=msvc-170)
-#elif defined(_MSC_VER) && _MSC_VER >= 1910 && defined(_MSVC_LANG) && _MSVC_LANG >=201402L
-#define LIBDIVIDE_CONSTEXPR constexpr
-// in case some other obscure compiler has the right __cpp_constexpr :
-#elif defined(__cpp_constexpr) && __cpp_constexpr >= 201304L
-#define LIBDIVIDE_CONSTEXPR constexpr
-#else
-#define LIBDIVIDE_CONSTEXPR LIBDIVIDE_INLINE
-#endif
enum Branching {
BRANCHFULL, // use branching algorithms
BRANCHFREE // use branchfree algorithms
@@ -3112,7 +3145,7 @@ struct NeonVecFor {
#define DISPATCHER_GEN(T, ALGO) \
libdivide_##ALGO##_t denom; \
LIBDIVIDE_INLINE dispatcher() {} \
-explicit LIBDIVIDE_CONSTEXPR dispatcher(decltype(nullptr)) : denom{} {} \
+explicit LIBDIVIDE_CONSTEXPR_INLINE dispatcher(decltype(nullptr)) : denom{} {} \
LIBDIVIDE_INLINE dispatcher(T d) : denom(libdivide_##ALGO##_gen(d)) {} \
LIBDIVIDE_INLINE T divide(T n) const { return libdivide_##ALGO##_do(n, &denom); } \
LIBDIVIDE_INLINE T recover() const { return libdivide_##ALGO##_recover(&denom); } \
@@ -3205,7 +3238,7 @@ class divider {
divider() {}
// constexpr zero-initialization to allow for use w/ static constinit
-explicit LIBDIVIDE_CONSTEXPR divider(decltype(nullptr)) : div(nullptr) {}
+explicit LIBDIVIDE_CONSTEXPR_INLINE divider(decltype(nullptr)) : div(nullptr) {}
// Constructor that takes the divisor as a parameter
LIBDIVIDE_INLINE divider(T d) : div(d) {}
@@ -3322,7 +3355,7 @@ using branchfree_divider = divider<T, BRANCHFREE>;
#endif // __cplusplus
-#if defined(_MSC_VER)
+#if defined(_MSC_VER) && !defined(__clang__)
#pragma warning(pop)
#endif