/* XXX Emscripten XXX */
#if __EMSCRIPTEN__
// When building for wasm we export `malloc` and `emscripten_builtin_malloc` as
// weak aliases of the internal `dlmalloc`, which is static to this file.
#define DLMALLOC_EXPORT static
/* mmap uses malloc, so malloc can't use mmap */
#define HAVE_MMAP 0
/* Emscripten's sbrk can interpret unsigned values greater than (MAX_SIZE_T / 2U) (2GB) correctly */
#define UNSIGNED_MORECORE 1
/* we can only grow the heap up anyhow, so don't try to trim */
#define MORECORE_CANNOT_TRIM 1
#ifndef DLMALLOC_DEBUG
/* dlmalloc has many checks; calls to abort() increase code size,
   so leave them only in debug builds */
#define ABORT __builtin_unreachable()
/* allow malloc stats only in debug builds, which brings in stdio code. */
#define NO_MALLOC_STATS 1
#endif
/* XXX Emscripten Tracing API. This defines away the code if tracing is disabled. */
#include <emscripten/trace.h>

#ifdef __EMSCRIPTEN_SHARED_MEMORY__
#define USE_LOCKS 1
#endif

/* Make malloc() and free() thread-safe by guarding the memory allocations with pthread mutexes. */
#if __EMSCRIPTEN_PTHREADS__
#define USE_SPIN_LOCKS 0 // Ensure we use pthread_mutex_t.
#endif

#ifndef MALLOC_ALIGNMENT
#include <stddef.h>
/* `malloc`ed pointers must be aligned at least as strictly as max_align_t. */
#define MALLOC_ALIGNMENT (__alignof__(max_align_t))
/*
  Emscripten aligns even float128 to 64 bits, to save size and increase speed.
  See https://github.com/emscripten-core/emscripten/issues/10072
*/
_Static_assert(MALLOC_ALIGNMENT == 8, "max_align_t must be 8");
#endif

#endif // __EMSCRIPTEN__


#define __THROW
#define __attribute_malloc__
#define __wur


/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to [email protected]

* Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
  Note: There may be an updated version of this malloc obtainable at
          ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms. You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
  You don't really need this .h file unless you call functions not
  defined in your system include files. The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters. If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers.
       (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (minimum)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
                                          8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size:   4-byte ptrs:  16 bytes    (including overhead)
                            8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc. This malloc
       guarantees not to modify any memory locations below the base of
       the heap, i.e., static variables, even in the presence of usage
       errors. The routines additionally detect most improper frees
       and reallocs. All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means. This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space. These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics. This may further improve
       security at the expense of time and space overhead. (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR. In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else.
       And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit. This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc. is surrounded with a lock. By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck. It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions. If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (see http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory. On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled. On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (see http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable. Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.) And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.
  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces. Normally, this requires use of locks.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly casted. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.
  Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator. You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc(). If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT          default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern  __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
  Controls the minimum alignment for malloc'ed chunks. It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually,
  as described below.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
  MS compilers. Otherwise, posix locks or win32 critical sections are
  used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK             default: not defined
  If defined nonzero, performs pthread_atfork upon initialization
  to initialize child lock while holding parent lock. The implementation
  assumes that pthread locks (not custom locks) are being used. In other
  cases, you may need to customize the implementation.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks.
  This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, which
  perform traversal of all heap space. Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks. On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again. Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort. Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks. The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms. The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE  default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to be able to
  return memory because there is none available.

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                 default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory. See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.
  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t"). It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS      default: 1 (true) if HAVE_MORECORE
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses. This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves time
  and possibly wasted space it would take to discover this though.

UNSIGNED_MORECORE        default: 0 (false)
  True if MORECORE can only handle unsigned arguments. This sets
  MORECORE_CANNOT_TRIM to 1 (true).

MORECORE_CANNOT_TRIM     default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL     default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of segments that are contiguous, and selectively
  releasing them to the OS if unused, but bounds execution times.

HAVE_MMAP                default: 1 (true)
  True if this system supports mmap or an emulation of it. If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from the system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP              default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS              default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS          default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize       default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units. This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM           default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO              default: 0
  If defined, don't compile "mallinfo".
  This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE      default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS          default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H,
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H,
LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H
                         default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY      default: page size if MORECORE_CONTIGUOUS,
                                  system_info.dwAllocationGranularity in WIN32,
                                  otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system. On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities. You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow. The value must be at least one
  page and must be a power of two. Setting to 0 causes initialization
  to either page size or win32 region size. (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD   default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free(). Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE. Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory. As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory. Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect. To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD   default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request.
  Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap. (If enough
  normal freed space already exists it is used instead.) Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations). Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low. Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them. However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks. The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems. The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation. The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory. To effectively
  disable, set to MAX_SIZE_T.
  This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */

#ifndef DLMALLOC_EXPORT
#define DLMALLOC_EXPORT extern
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tchar.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define LACKS_SCHED_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* MMAP_CLEARS */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
/* XXX: The following block was adapted locally to clean up new Clang
   -Wexpansion-to-defined warnings.
   http://lists.llvm.org/pipermail/cfe-commits/Week-of-Mon-20160118/147239.html */
#if (defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
    (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0)
#define USE_LOCKS 1
#else
#define USE_LOCKS 0
#endif
#endif /* USE_LOCKS */

#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
#if ((defined(__GNUC__) &&                                         \
     ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||  \
      defined(__i386__) || defined(__x86_64__))) ||                \
    (defined(_MSC_VER) && _MSC_VER>=1310))
#ifndef USE_SPIN_LOCKS
#define USE_SPIN_LOCKS 1
#endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
#error "USE_SPIN_LOCKS defined without implementation"
#endif /* ... locks available... */
#elif !defined(USE_SPIN_LOCKS)
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS */

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */

#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef MALLOC_INSPECT_ALL
#define MALLOC_INSPECT_ALL 0
#endif  /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#define _GNU_SOURCE /* Turns on mremap() definition */
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#ifndef UNSIGNED_MORECORE
#define UNSIGNED_MORECORE 0
#endif  /* UNSIGNED_MORECORE */
#if UNSIGNED_MORECORE
#define MORECORE_CANNOT_TRIM 1
#endif  /* UNSIGNED_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */
#ifndef NO_MALLOC_STATS
#define NO_MALLOC_STATS 0
#endif  /* NO_MALLOC_STATS */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo(). The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc. These fields are
  instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else a compliant version is
  declared below. These must be precisely the same for mallinfo() to
  work. The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline.
  The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
#if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
#elif defined(_MSC_VER)
#define FORCEINLINE __forceinline
#endif
#endif
#ifndef NOINLINE
#if defined(__GNUC__)
#define NOINLINE __attribute__ ((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
#define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
#define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
// XXX Emscripten XXX
#if defined(__EMSCRIPTEN__)
void* __libc_malloc(size_t) __attribute__((weak, alias("dlmalloc")));
void  __libc_free(void*) __attribute__((weak, alias("dlfree")));
void* __libc_calloc(size_t, size_t) __attribute__((weak, alias("dlcalloc")));
void* __libc_realloc(void*, size_t) __attribute__((weak, alias("dlrealloc")));
void* malloc(size_t) __attribute__((weak, alias("dlmalloc")));
void  free(void*) __attribute__((weak, alias("dlfree")));
void* calloc(size_t, size_t) __attribute__((weak, alias("dlcalloc")));
void* realloc(void*, size_t) __attribute__((weak, alias("dlrealloc")));
void* realloc_in_place(void*, size_t) __attribute__((weak, alias("dlrealloc_in_place")));
void* memalign(size_t, size_t) __attribute__((weak, alias("dlmemalign")));
int posix_memalign(void**, size_t, size_t) __attribute__((weak, alias("dlposix_memalign")));
void* valloc(size_t) __attribute__((weak, alias("dlvalloc")));
void* pvalloc(size_t) __attribute__((weak, alias("dlpvalloc")));
#if !NO_MALLINFO
struct mallinfo mallinfo(void) __attribute__((weak, alias("dlmallinfo")));
#endif
int mallopt(int, int) __attribute__((weak, alias("dlmallopt")));
int malloc_trim(size_t) __attribute__((weak, alias("dlmalloc_trim")));
#if !NO_MALLOC_STATS
void malloc_stats(void) __attribute__((weak, alias("dlmalloc_stats")));
#endif
size_t malloc_usable_size(const void*) __attribute__((weak, alias("dlmalloc_usable_size")));
size_t malloc_footprint(void) __attribute__((weak, alias("dlmalloc_footprint")));
size_t malloc_max_footprint(void) __attribute__((weak, alias("dlmalloc_max_footprint")));
size_t malloc_footprint_limit(void) __attribute__((weak, alias("dlmalloc_footprint_limit")));
size_t malloc_set_footprint_limit(size_t bytes) __attribute__((weak, alias("dlmalloc_set_footprint_limit")));
#if MALLOC_INSPECT_ALL
void malloc_inspect_all(void(*handler)(void*, void *, size_t, void*), void* arg) __attribute__((weak, alias("dlmalloc_inspect_all")));
#endif
void** independent_calloc(size_t, size_t, void**) __attribute__((weak, alias("dlindependent_calloc")));
void** independent_comalloc(size_t, size_t*, void**) __attribute__((weak, alias("dlindependent_comalloc")));
size_t bulk_free(void**, size_t n_elements) __attribute__((weak, alias("dlbulk_free")));
#endif /* __EMSCRIPTEN__ */
#endif /* USE_DL_PREFIX */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32-bit systems, and 32 bytes on 64-bit
  systems.)
  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
DLMALLOC_EXPORT void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, which had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
DLMALLOC_EXPORT void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible. realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
DLMALLOC_EXPORT void* dlrealloc(void*, size_t);

/*
  realloc_in_place(void* p, size_t n)
  Resizes the space allocated for p to size n, only if this can be
  done without moving p (i.e., only if there is adjacent space
  available if n is greater than p's current allocated size, or n is
  less than or equal to p's size). This may be used instead of plain
  realloc if an alternative allocation strategy is needed upon failure
  to expand space; for example, reallocation of a buffer that must be
  memory-aligned or cleared. You can use realloc_in_place to trigger
  these alternatives only when needed.

  Returns p if successful; otherwise null.
*/
DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);

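/*
  A minimal usage sketch of the in-place and aligned routines declared
  above (the sizes, the 64-byte alignment, and the grow_buffer name are
  purely illustrative; a real caller would pick its own policy on failure):

    // Try to grow a buffer without moving it; fall back to plain realloc.
    void* grow_buffer(void* p, size_t new_size) {
      if (dlrealloc_in_place(p, new_size) != 0)
        return p;                       // grown in place, p unchanged
      return dlrealloc(p, new_size);    // may move (or return null)
    }

    // Obtain a 64-byte-aligned block; aligned chunks are freed normally.
    void* buf = dlmemalign(64, 1024);
    if (buf != 0)
      dlfree(buf);
*/
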
/*
  int posix_memalign(void** pp, size_t alignment, size_t n);
  Allocates a chunk of n bytes, aligned in accord with the alignment
  argument. Differs from memalign only in that it (1) assigns the
  allocated memory to *pp rather than returning it, (2) fails and
  returns EINVAL if the alignment is not a power of two, and (3) fails and
  returns ENOMEM if memory cannot be allocated.
*/
DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
DLMALLOC_EXPORT void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair. mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0. To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h. None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details. Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
DLMALLOC_EXPORT int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system. The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption. Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);

/*
  malloc_footprint_limit();
  Returns the number of bytes that the heap is allowed to obtain from
  the system, returning the last value returned by
  malloc_set_footprint_limit, or the maximum size_t value if
  never set. The returned value reflects a permission. There is no
  guarantee that this number of bytes can actually be obtained from
  the system.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();

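/*
  A minimal sketch of monitoring heap growth with the footprint routines
  declared above (the 1MB headroom threshold is an arbitrary illustration;
  if no limit was ever set, the limit reads as MAX_SIZE_T and the check
  never triggers):

    size_t used  = dlmalloc_footprint();        // bytes obtained from the system
    size_t limit = dlmalloc_footprint_limit();  // MAX_SIZE_T if never set
    if (limit - used < (size_t)(1024U * 1024U)) {
      // less than ~1MB of permitted growth remains; shed caches, etc.
    }
*/
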
/*
  malloc_set_footprint_limit();
  Sets the maximum number of bytes to obtain from the system, causing
  failure returns from malloc and related functions upon attempts to
  exceed this value. The argument value may be subject to page
  rounding to an enforceable limit; this actual value is returned.
  Using an argument of the maximum possible size_t effectively
  disables checks. If the argument is less than or equal to the
  current malloc_footprint, then all future allocations that require
  additional system memory will fail. However, invocation cannot
  retroactively deallocate existing used memory.
*/
DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);

#if MALLOC_INSPECT_ALL
/*
  malloc_inspect_all(void(*handler)(void *start,
                                    void *end,
                                    size_t used_bytes,
                                    void* callback_arg),
                     void* arg);
  Traverses the heap and calls the given handler for each managed
  region, skipping all bytes that are (or may be) used for bookkeeping
  purposes. Traversal does not include chunks that have been
  directly memory mapped. Each reported region begins at the start
  address, and continues up to but not including the end address. The
  first used_bytes of the region contain allocated data. If
  used_bytes is zero, the region is unallocated. The handler is
  invoked with the given callback argument. If locks are defined, they
  are held during the entire traversal. It is a bad idea to invoke
  other malloc functions from within the handler.

  For example, to count the number of in-use chunks with size greater
  than 1000, you could write:
    static int count = 0;
    void count_chunks(void* start, void* end, size_t used, void* arg) {
      if (used >= 1000) ++count;
    }
  then:
    malloc_inspect_all(count_chunks, NULL);

  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                                          void* arg);

#endif /* MALLOC_INSPECT_ALL */

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
             than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
             back to system via malloc_trim. ("ideally" means that
             it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

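/*
  A minimal sketch of reading the statistics above (field names are those of
  the struct mallinfo declared earlier in this file; with the default
  MALLINFO_FIELD_TYPE of size_t the casts are redundant and only illustrate
  the intent):

    struct mallinfo mi = dlmallinfo();
    size_t in_use    = (size_t)mi.uordblks;  // currently allocated bytes
    size_t free_heap = (size_t)mi.fordblks;  // free bytes inside the heap
    // in_use + free_heap approximates the non-mmapped footprint (mi.arena)
*/
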
/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools. It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

    struct Node { int item; struct Node* next; };

    struct Node* build_list() {
      struct Node** pool;
      int n = read_number_of_nodes_needed();
      if (n <= 0) return 0;
      pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
      if (pool == 0) die();
      // organize into a linked list...
      struct Node* first = pool[0];
      for (int i = 0; i < n-1; ++i)
        pool[i]->next = pool[i+1];
      free(pool); // Can now free the array (or not, if it is needed later)
      return first;
    }
*/
DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);

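/*
  A minimal teardown sketch to complement the pool example above (purely
  illustrative: it assumes the pointer array returned by independent_calloc
  was kept rather than freed, and uses bulk_free, declared further below,
  to release all elements at once):

    void destroy_pool(void** pool, size_t n) {
      bulk_free(pool, n);  // frees each non-null element and nulls the slots
      free(pool);          // the pointer array itself was malloc'ed
    }
*/
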
/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array. It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null,
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time. For example:

    struct Head { ... }
    struct Foot { ... }

    void send_message(char* msg) {
      int msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      if (independent_comalloc(3, sizes, chunks) == 0)
        die();
      struct Head* head = (struct Head*)(chunks[0]);
      char*        body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      // ...
    }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from a series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);

/*
  bulk_free(void* array[], size_t n_elements)
  Frees and clears (sets to null) each non-null pointer in the given
  array. This is likely to be faster than freeing them one-by-one.
  If footers are used, pointers that have been allocated in different
  mspaces are not freed or cleared, and the count of all such pointers
  is returned. For large arrays of pointers with poor locality, it
  may be worthwhile to sort this array before calling bulk_free.
*/
DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);

/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to the nearest pagesize.
*/
DLMALLOC_EXPORT void* dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
DLMALLOC_EXPORT int dlmalloc_trim(size_t);

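/*
  A minimal sketch of trimming after a large release (big_buffer and the pad
  of 0 are illustrative only; as noted above, trimming may legitimately
  release nothing, and on targets where MORECORE cannot trim it is a no-op):

    dlfree(big_buffer);            // drop a large, long-lived allocation
    if (dlmalloc_trim(0)) {
      // some unused top-most memory was returned to the system
    }
*/
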
Because it includes1313alignment wastage as being in use, this figure may be greater than1314zero even when no user-level chunks are allocated.13151316The reported current and maximum system memory can be inaccurate if1317a program makes other calls to system memory allocation functions1318(normally sbrk) outside of malloc.13191320malloc_stats prints only the most commonly interesting statistics.1321More information can be obtained by calling mallinfo.1322*/1323DLMALLOC_EXPORT void dlmalloc_stats(void);13241325/*1326malloc_usable_size(void* p);13271328Returns the number of bytes you can actually use in1329an allocated chunk, which may be more than you requested (although1330often not) due to alignment and minimum size constraints.1331You can use this many bytes without worrying about1332overwriting other allocated objects. This is not a particularly great1333programming practice. malloc_usable_size can be more useful in1334debugging and assertions, for example:13351336p = malloc(n);1337assert(malloc_usable_size(p) >= 256);1338*/1339/* XXX EMSCRIPTEN: mark for export (and therefore weak) */1340DLMALLOC_EXPORT size_t dlmalloc_usable_size(void*);13411342#endif /* ONLY_MSPACES */13431344#if MSPACES13451346/*1347mspace is an opaque type representing an independent1348region of space that supports mspace_malloc, etc.1349*/1350typedef void* mspace;13511352/*1353create_mspace creates and returns a new independent space with the1354given initial capacity, or, if 0, the default granularity size. It1355returns null if there is no system memory available to create the1356space. If argument locked is non-zero, the space uses a separate1357lock to control access. The capacity of the space will grow1358dynamically as needed to service mspace_malloc requests. You can1359control the sizes of incremental increases of this space by1360compiling with a different DEFAULT_GRANULARITY or dynamically1361setting with mallopt(M_GRANULARITY, value).1362*/1363DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);13641365/*1366destroy_mspace destroys the given space, and attempts to return all1367of its memory back to the system, returning the total number of1368bytes freed. After destruction, the results of access to all memory1369used by the space become undefined.1370*/1371DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);13721373/*1374create_mspace_with_base uses the memory supplied as the initial base1375of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this1376space is used for bookkeeping, so the capacity must be at least this1377large. (Otherwise 0 is returned.) When this initial space is1378exhausted, additional memory will be obtained from the system.1379Destroying this space will deallocate all additionally allocated1380space (if possible) but not the initial base.1381*/1382DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);13831384/*1385mspace_track_large_chunks controls whether requests for large chunks1386are allocated in their own untracked mmapped regions, separate from1387others in this mspace. By default large chunks are not tracked,1388which reduces fragmentation. However, such chunks are not1389necessarily released to the system upon destroy_mspace. Enabling1390tracking by setting to true may increase fragmentation, but avoids1391leakage when relying on destroy_mspace to release all memory1392allocated using this space. 
The function returns the previous1393setting.1394*/1395DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);139613971398/*1399mspace_malloc behaves as malloc, but operates within1400the given space.1401*/1402DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);14031404/*1405mspace_free behaves as free, but operates within1406the given space.14071408If compiled with FOOTERS==1, mspace_free is not actually needed.1409free may be called instead of mspace_free because freed chunks from1410any space are handled by their originating spaces.1411*/1412DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);14131414/*1415mspace_realloc behaves as realloc, but operates within1416the given space.14171418If compiled with FOOTERS==1, mspace_realloc is not actually1419needed. realloc may be called instead of mspace_realloc because1420realloced chunks from any space are handled by their originating1421spaces.1422*/1423DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);14241425/*1426mspace_calloc behaves as calloc, but operates within1427the given space.1428*/1429DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);14301431/*1432mspace_memalign behaves as memalign, but operates within1433the given space.1434*/1435DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);14361437/*1438mspace_independent_calloc behaves as independent_calloc, but1439operates within the given space.1440*/1441DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,1442size_t elem_size, void* chunks[]);14431444/*1445mspace_independent_comalloc behaves as independent_comalloc, but1446operates within the given space.1447*/1448DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,1449size_t sizes[], void* chunks[]);14501451/*1452mspace_footprint() returns the number of bytes obtained from the1453system for this space.1454*/1455DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);14561457/*1458mspace_max_footprint() returns the peak number of bytes obtained from the1459system for this space.1460*/1461DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);146214631464#if !NO_MALLINFO1465/*1466mspace_mallinfo behaves as mallinfo, but reports properties of1467the given space.1468*/1469DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);1470#endif /* NO_MALLINFO */14711472/*1473malloc_usable_size(void* p) behaves the same as malloc_usable_size;1474*/1475DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);14761477/*1478mspace_malloc_stats behaves as malloc_stats, but reports1479properties of the given space.1480*/1481DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);14821483/*1484mspace_trim behaves as malloc_trim, but1485operates within the given space.1486*/1487DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);14881489/*1490An alias for mallopt.1491*/1492DLMALLOC_EXPORT int mspace_mallopt(int, int);14931494#endif /* MSPACES */14951496#ifdef __cplusplus1497} /* end of extern "C" */1498#endif /* __cplusplus */14991500/*1501========================================================================1502To make a fully customizable malloc.h header file, cut everything1503above this line, put into file malloc.h, edit to suit, and #include it1504on the next line, as well as in programs that use this malloc.1505========================================================================1506*/15071508/* #include "malloc.h" */15091510/*------------------------------ 
internal #includes ---------------------- */15111512#ifdef _MSC_VER1513#pragma warning( disable : 4146 ) /* no "unsigned" warnings */1514#endif /* _MSC_VER */1515#if !NO_MALLOC_STATS1516#include <stdio.h> /* for printing in malloc_stats */1517#endif /* NO_MALLOC_STATS */1518#ifndef LACKS_ERRNO_H1519#include <errno.h> /* for MALLOC_FAILURE_ACTION */1520#endif /* LACKS_ERRNO_H */1521#ifdef DEBUG1522#if ABORT_ON_ASSERT_FAILURE1523#undef assert1524#define assert(x) if(!(x)) ABORT1525#else /* ABORT_ON_ASSERT_FAILURE */1526#include <assert.h>1527#endif /* ABORT_ON_ASSERT_FAILURE */1528#else /* DEBUG */1529#ifndef assert1530#define assert(x)1531#endif1532#define DEBUG 01533#endif /* DEBUG */1534#if !defined(WIN32) && !defined(LACKS_TIME_H)1535#include <time.h> /* for magic initialization */1536#endif /* WIN32 */1537#ifndef LACKS_STDLIB_H1538#include <stdlib.h> /* for abort() */1539#endif /* LACKS_STDLIB_H */1540#ifndef LACKS_STRING_H1541#include <string.h> /* for memset etc */1542#endif /* LACKS_STRING_H */1543#if USE_BUILTIN_FFS1544#ifndef LACKS_STRINGS_H1545#include <strings.h> /* for ffs */1546#endif /* LACKS_STRINGS_H */1547#endif /* USE_BUILTIN_FFS */1548#if HAVE_MMAP1549#ifndef LACKS_SYS_MMAN_H1550/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */1551#if (defined(linux) && !defined(__USE_GNU))1552#define __USE_GNU 11553#include <sys/mman.h> /* for mmap */1554#undef __USE_GNU1555#else1556#include <sys/mman.h> /* for mmap */1557#endif /* linux */1558#endif /* LACKS_SYS_MMAN_H */1559#ifndef LACKS_FCNTL_H1560#include <fcntl.h>1561#endif /* LACKS_FCNTL_H */1562#endif /* HAVE_MMAP */1563#ifndef LACKS_UNISTD_H1564#include <unistd.h> /* for sbrk, sysconf */1565#else /* LACKS_UNISTD_H */1566#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)1567extern void* sbrk(ptrdiff_t);1568#endif /* FreeBSD etc */1569#endif /* LACKS_UNISTD_H */15701571/* Declarations for locking */1572#if USE_LOCKS1573#ifndef WIN321574#if defined (__SVR4) && defined (__sun) /* solaris */1575#include <thread.h>1576#elif !defined(LACKS_SCHED_H)1577#include <sched.h>1578#endif /* solaris or LACKS_SCHED_H */1579#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS1580#include <pthread.h>1581#endif /* USE_RECURSIVE_LOCKS ... 
*/1582#elif defined(_MSC_VER)1583#ifndef _M_AMD641584/* These are already defined on AMD64 builds */1585#ifdef __cplusplus1586extern "C" {1587#endif /* __cplusplus */1588LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);1589LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);1590#ifdef __cplusplus1591}1592#endif /* __cplusplus */1593#endif /* _M_AMD64 */1594#pragma intrinsic (_InterlockedCompareExchange)1595#pragma intrinsic (_InterlockedExchange)1596#define interlockedcompareexchange _InterlockedCompareExchange1597#define interlockedexchange _InterlockedExchange1598#elif defined(WIN32) && defined(__GNUC__)1599#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)1600#define interlockedexchange __sync_lock_test_and_set1601#endif /* Win32 */1602#else /* USE_LOCKS */1603#endif /* USE_LOCKS */16041605#ifndef LOCK_AT_FORK1606#define LOCK_AT_FORK 01607#endif16081609/* Declarations for bit scanning on win32 */1610#if defined(_MSC_VER) && _MSC_VER>=13001611#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */1612#ifdef __cplusplus1613extern "C" {1614#endif /* __cplusplus */1615unsigned char _BitScanForward(unsigned long *index, unsigned long mask);1616unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);1617#ifdef __cplusplus1618}1619#endif /* __cplusplus */16201621#define BitScanForward _BitScanForward1622#define BitScanReverse _BitScanReverse1623#pragma intrinsic(_BitScanForward)1624#pragma intrinsic(_BitScanReverse)1625#endif /* BitScanForward */1626#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */16271628#ifndef WIN321629#ifndef malloc_getpagesize1630# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */1631# ifndef _SC_PAGE_SIZE1632# define _SC_PAGE_SIZE _SC_PAGESIZE1633# endif1634# endif1635# ifdef _SC_PAGE_SIZE1636# if defined(__EMSCRIPTEN__)1637# define malloc_getpagesize (4096) /* avoid sysconf calls during startup */1638# else1639# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)1640# endif1641# else1642# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)1643extern size_t getpagesize();1644# define malloc_getpagesize getpagesize()1645# else1646# ifdef WIN32 /* use supplied emulation of getpagesize */1647# define malloc_getpagesize getpagesize()1648# else1649# ifndef LACKS_SYS_PARAM_H1650# include <sys/param.h>1651# endif1652# ifdef EXEC_PAGESIZE1653# define malloc_getpagesize EXEC_PAGESIZE1654# else1655# ifdef NBPG1656# ifndef CLSIZE1657# define malloc_getpagesize NBPG1658# else1659# define malloc_getpagesize (NBPG * CLSIZE)1660# endif1661# else1662# ifdef NBPC1663# define malloc_getpagesize NBPC1664# else1665# ifdef PAGESIZE1666# define malloc_getpagesize PAGESIZE1667# else /* just guess */1668# define malloc_getpagesize ((size_t)4096U)1669# endif1670# endif1671# endif1672# endif1673# endif1674# endif1675# endif1676#endif1677#endif16781679/* ------------------- size_t and alignment properties -------------------- */16801681/* The byte and bit size of a size_t */1682#define SIZE_T_SIZE (sizeof(size_t))1683#define SIZE_T_BITSIZE (sizeof(size_t) << 3)16841685/* Some constants coerced to size_t */1686/* Annoying but necessary to avoid errors on some platforms */1687#define SIZE_T_ZERO ((size_t)0)1688#define SIZE_T_ONE ((size_t)1)1689#define SIZE_T_TWO ((size_t)2)1690#define SIZE_T_FOUR ((size_t)4)1691#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)1692#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)1693#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)1694#define 
HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)16951696/* The bit mask value corresponding to MALLOC_ALIGNMENT */1697#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)16981699/* True if address a has acceptable alignment */1700#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)17011702/* the number of bytes to offset an address to align it */1703#define align_offset(A)\1704((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\1705((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))17061707/* -------------------------- MMAP preliminaries ------------------------- */17081709/*1710If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and1711checks to fail so compiler optimizer can delete code rather than1712using so many "#if"s.1713*/171417151716/* MORECORE and MMAP must return MFAIL on failure */1717#define MFAIL ((void*)(MAX_SIZE_T))1718#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */17191720#if HAVE_MMAP17211722#ifndef WIN321723#define MUNMAP_DEFAULT(a, s) munmap((a), (s))1724#define MMAP_PROT (PROT_READ|PROT_WRITE)1725#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)1726#define MAP_ANONYMOUS MAP_ANON1727#endif /* MAP_ANON */1728#ifdef MAP_ANONYMOUS1729#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)1730#define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)1731#else /* MAP_ANONYMOUS */1732/*1733Nearly all versions of mmap support MAP_ANONYMOUS, so the following1734is unlikely to be needed, but is supplied just in case.1735*/1736#define MMAP_FLAGS (MAP_PRIVATE)1737static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */1738#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \1739(dev_zero_fd = open("/dev/zero", O_RDWR), \1740mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \1741mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))1742#endif /* MAP_ANONYMOUS */17431744#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)17451746#else /* WIN32 */17471748/* Win32 MMAP via VirtualAlloc */1749static FORCEINLINE void* win32mmap(size_t size) {1750void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);1751return (ptr != 0)? ptr: MFAIL;1752}17531754/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */1755static FORCEINLINE void* win32direct_mmap(size_t size) {1756void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,1757PAGE_READWRITE);1758return (ptr != 0)? 
ptr: MFAIL;1759}17601761/* This function supports releasing coalesed segments */1762static FORCEINLINE int win32munmap(void* ptr, size_t size) {1763MEMORY_BASIC_INFORMATION minfo;1764char* cptr = (char*)ptr;1765while (size) {1766if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)1767return -1;1768if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||1769minfo.State != MEM_COMMIT || minfo.RegionSize > size)1770return -1;1771if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)1772return -1;1773cptr += minfo.RegionSize;1774size -= minfo.RegionSize;1775}1776return 0;1777}17781779#define MMAP_DEFAULT(s) win32mmap(s)1780#define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))1781#define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)1782#endif /* WIN32 */1783#endif /* HAVE_MMAP */17841785#if HAVE_MREMAP1786#ifndef WIN321787#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))1788#endif /* WIN32 */1789#endif /* HAVE_MREMAP */17901791/**1792* Define CALL_MORECORE1793*/1794#if HAVE_MORECORE1795#ifdef MORECORE1796#define CALL_MORECORE(S) MORECORE(S)1797#else /* MORECORE */1798#define CALL_MORECORE(S) MORECORE_DEFAULT(S)1799#endif /* MORECORE */1800#else /* HAVE_MORECORE */1801#define CALL_MORECORE(S) MFAIL1802#endif /* HAVE_MORECORE */18031804/**1805* Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP1806*/1807#if HAVE_MMAP1808#define USE_MMAP_BIT (SIZE_T_ONE)18091810#ifdef MMAP1811#define CALL_MMAP(s) MMAP(s)1812#else /* MMAP */1813#define CALL_MMAP(s) MMAP_DEFAULT(s)1814#endif /* MMAP */1815#ifdef MUNMAP1816#define CALL_MUNMAP(a, s) MUNMAP((a), (s))1817#else /* MUNMAP */1818#define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))1819#endif /* MUNMAP */1820#ifdef DIRECT_MMAP1821#define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)1822#else /* DIRECT_MMAP */1823#define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)1824#endif /* DIRECT_MMAP */1825#else /* HAVE_MMAP */1826#define USE_MMAP_BIT (SIZE_T_ZERO)18271828#define MMAP(s) MFAIL1829#define MUNMAP(a, s) (-1)1830#define DIRECT_MMAP(s) MFAIL1831#define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)1832#define CALL_MMAP(s) MMAP(s)1833#define CALL_MUNMAP(a, s) MUNMAP((a), (s))1834#endif /* HAVE_MMAP */18351836/**1837* Define CALL_MREMAP1838*/1839#if HAVE_MMAP && HAVE_MREMAP1840#ifdef MREMAP1841#define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))1842#else /* MREMAP */1843#define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))1844#endif /* MREMAP */1845#else /* HAVE_MMAP && HAVE_MREMAP */1846#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL1847#endif /* HAVE_MMAP && HAVE_MREMAP */18481849/* mstate bit set if continguous morecore disabled or failed */1850#define USE_NONCONTIGUOUS_BIT (4U)18511852/* segment bit set in create_mspace_with_base */1853#define EXTERN_BIT (8U)185418551856/* --------------------------- Lock preliminaries ------------------------ */18571858/*1859When locks are defined, there is one global lock, plus1860one per-mspace lock.18611862The global lock_ensures that mparams.magic and other unique1863mparams values are initialized only once. It also protects1864sequences of calls to MORECORE. In many cases sys_alloc requires1865two calls, that should not be interleaved with calls by other1866threads. 
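(For example, in the contiguous case sys_alloc may call MORECORE(asize)
to extend the heap and then MORECORE(0) to learn the new break; a
MORECORE call from another thread landing between the two would leave
the segment bookkeeping describing memory it does not own.)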
This does not protect against direct calls to MORECORE1867by other threads not using this lock, so there is still code to1868cope the best we can on interference.18691870Per-mspace locks surround calls to malloc, free, etc.1871By default, locks are simple non-reentrant mutexes.18721873Because lock-protected regions generally have bounded times, it is1874OK to use the supplied simple spinlocks. Spinlocks are likely to1875improve performance for lightly contended applications, but worsen1876performance under heavy contention.18771878If USE_LOCKS is > 1, the definitions of lock routines here are1879bypassed, in which case you will need to define the type MLOCK_T,1880and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK1881and TRY_LOCK. You must also declare a1882static MLOCK_T malloc_global_mutex = { initialization values };.18831884*/18851886#if !USE_LOCKS1887#define USE_LOCK_BIT (0U)1888#define INITIAL_LOCK(l) (0)1889#define DESTROY_LOCK(l) (0)1890#define ACQUIRE_MALLOC_GLOBAL_LOCK()1891#define RELEASE_MALLOC_GLOBAL_LOCK()18921893#else1894#if USE_LOCKS > 11895/* ----------------------- User-defined locks ------------------------ */1896/* Define your own lock implementation here */1897/* #define INITIAL_LOCK(lk) ... */1898/* #define DESTROY_LOCK(lk) ... */1899/* #define ACQUIRE_LOCK(lk) ... */1900/* #define RELEASE_LOCK(lk) ... */1901/* #define TRY_LOCK(lk) ... */1902/* static MLOCK_T malloc_global_mutex = ... */19031904#elif USE_SPIN_LOCKS19051906/* First, define CAS_LOCK and CLEAR_LOCK on ints */1907/* Note CAS_LOCK defined to return 0 on success */19081909#if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))1910#define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)1911#define CLEAR_LOCK(sl) __sync_lock_release(sl)19121913#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))1914/* Custom spin locks for older gcc on x86 */1915static FORCEINLINE int x86_cas_lock(int *sl) {1916int ret;1917int val = 1;1918int cmp = 0;1919__asm__ __volatile__ ("lock; cmpxchgl %1, %2"1920: "=a" (ret)1921: "r" (val), "m" (*(sl)), "0"(cmp)1922: "memory", "cc");1923return ret;1924}19251926static FORCEINLINE void x86_clear_lock(int* sl) {1927assert(*sl != 0);1928int prev = 0;1929int ret;1930__asm__ __volatile__ ("lock; xchgl %0, %1"1931: "=r" (ret)1932: "m" (*(sl)), "0"(prev)1933: "memory");1934}19351936#define CAS_LOCK(sl) x86_cas_lock(sl)1937#define CLEAR_LOCK(sl) x86_clear_lock(sl)19381939#else /* Win32 MSC */1940#define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)1941#define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)19421943#endif /* ... gcc spins locks ... */19441945/* How to yield for a spin lock */1946#define SPINS_PER_YIELD 631947#if defined(_MSC_VER)1948#define SLEEP_EX_DURATION 50 /* delay for yield/sleep */1949#define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)1950#elif defined (__SVR4) && defined (__sun) /* solaris */1951#define SPIN_LOCK_YIELD thr_yield();1952#elif !defined(LACKS_SCHED_H)1953#define SPIN_LOCK_YIELD sched_yield();1954#else1955#define SPIN_LOCK_YIELD1956#endif /* ... yield ... 
*/19571958#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 01959/* Plain spin locks use single word (embedded in malloc_states) */1960static int spin_acquire_lock(int *sl) {1961int spins = 0;1962while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {1963if ((++spins & SPINS_PER_YIELD) == 0) {1964SPIN_LOCK_YIELD;1965}1966}1967return 0;1968}19691970#define MLOCK_T int1971#define TRY_LOCK(sl) !CAS_LOCK(sl)1972#define RELEASE_LOCK(sl) CLEAR_LOCK(sl)1973#define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)1974#define INITIAL_LOCK(sl) (*sl = 0)1975#define DESTROY_LOCK(sl) (0)1976static MLOCK_T malloc_global_mutex = 0;19771978#else /* USE_RECURSIVE_LOCKS */1979/* types for lock owners */1980#ifdef WIN321981#define THREAD_ID_T DWORD1982#define CURRENT_THREAD GetCurrentThreadId()1983#define EQ_OWNER(X,Y) ((X) == (Y))1984#else1985/*1986Note: the following assume that pthread_t is a type that can be1987initialized to (casted) zero. If this is not the case, you will need to1988somehow redefine these or not use spin locks.1989*/1990#define THREAD_ID_T pthread_t1991#define CURRENT_THREAD pthread_self()1992#define EQ_OWNER(X,Y) pthread_equal(X, Y)1993#endif19941995struct malloc_recursive_lock {1996int sl;1997unsigned int c;1998THREAD_ID_T threadid;1999};20002001#define MLOCK_T struct malloc_recursive_lock2002static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};20032004static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {2005assert(lk->sl != 0);2006if (--lk->c == 0) {2007CLEAR_LOCK(&lk->sl);2008}2009}20102011static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {2012THREAD_ID_T mythreadid = CURRENT_THREAD;2013int spins = 0;2014for (;;) {2015if (*((volatile int *)(&lk->sl)) == 0) {2016if (!CAS_LOCK(&lk->sl)) {2017lk->threadid = mythreadid;2018lk->c = 1;2019return 0;2020}2021}2022else if (EQ_OWNER(lk->threadid, mythreadid)) {2023++lk->c;2024return 0;2025}2026if ((++spins & SPINS_PER_YIELD) == 0) {2027SPIN_LOCK_YIELD;2028}2029}2030}20312032static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {2033THREAD_ID_T mythreadid = CURRENT_THREAD;2034if (*((volatile int *)(&lk->sl)) == 0) {2035if (!CAS_LOCK(&lk->sl)) {2036lk->threadid = mythreadid;2037lk->c = 1;2038return 1;2039}2040}2041else if (EQ_OWNER(lk->threadid, mythreadid)) {2042++lk->c;2043return 1;2044}2045return 0;2046}20472048#define RELEASE_LOCK(lk) recursive_release_lock(lk)2049#define TRY_LOCK(lk) recursive_try_lock(lk)2050#define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)2051#define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)2052#define DESTROY_LOCK(lk) (0)2053#endif /* USE_RECURSIVE_LOCKS */20542055#elif defined(WIN32) /* Win32 critical sections */2056#define MLOCK_T CRITICAL_SECTION2057#define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)2058#define RELEASE_LOCK(lk) LeaveCriticalSection(lk)2059#define TRY_LOCK(lk) TryEnterCriticalSection(lk)2060#define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))2061#define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)2062#define NEED_GLOBAL_LOCK_INIT20632064static MLOCK_T malloc_global_mutex;2065static volatile LONG malloc_global_mutex_status;20662067/* Use spin loop to initialize global lock */2068static void init_malloc_global_mutex() {2069for (;;) {2070long stat = malloc_global_mutex_status;2071if (stat > 0)2072return;2073/* transition to < 0 while initializing, then to > 0) */2074if (stat == 0 &&2075interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) 
{2076InitializeCriticalSection(&malloc_global_mutex);2077interlockedexchange(&malloc_global_mutex_status, (LONG)1);2078return;2079}2080SleepEx(0, FALSE);2081}2082}20832084#else /* pthreads-based locks */20852086#define MLOCK_T pthread_mutex_t2087#define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)2088#define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)2089#define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))2090#define INITIAL_LOCK(lk) pthread_init_lock(lk)2091#define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)20922093#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)2094/* Cope with old-style linux recursive lock initialization by adding */2095/* skipped internal declaration from pthread.h */2096extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,2097int __kind));2098#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP2099#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)2100#endif /* USE_RECURSIVE_LOCKS ... */21012102static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;21032104static int pthread_init_lock (MLOCK_T *lk) {2105pthread_mutexattr_t attr;2106if (pthread_mutexattr_init(&attr)) return 1;2107#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 02108if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;2109#endif2110if (pthread_mutex_init(lk, &attr)) return 1;2111if (pthread_mutexattr_destroy(&attr)) return 1;2112return 0;2113}21142115#endif /* ... lock types ... */21162117/* Common code for all lock types */2118#define USE_LOCK_BIT (2U)21192120#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK2121#define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);2122#endif21232124#ifndef RELEASE_MALLOC_GLOBAL_LOCK2125#define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);2126#endif21272128#endif /* USE_LOCKS */21292130/* ----------------------- Chunk representations ------------------------ */21312132/*2133(The following includes lightly edited explanations by Colin Plumb.)21342135The malloc_chunk declaration below is misleading (but accurate and2136necessary). It declares a "view" into memory allowing access to2137necessary fields at known offsets from a given base.21382139Chunks of memory are maintained using a `boundary tag' method as2140originally described by Knuth. (See the paper by Paul Wilson2141ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such2142techniques.) Sizes of free chunks are stored both in the front of2143each chunk and at the end. This makes consolidating fragmented2144chunks into bigger chunks fast. The head fields also hold bits2145representing whether chunks are free or in use.21462147Here are some pictures to make it clearer. 
They are "exploded" to2148show that the state of a chunk can be thought of as extending from2149the high 31 bits of the head field of its header through the2150prev_foot and PINUSE_BIT bit of the following chunk header.21512152A chunk that's in use looks like:21532154chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2155| Size of previous chunk (if P = 0) |2156+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2157+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|2158| Size of this chunk 1| +-+2159mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2160| |2161+- -+2162| |2163+- -+2164| :2165+- size - sizeof(size_t) available payload bytes -+2166: |2167chunk-> +- -+2168| |2169+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2170+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|2171| Size of next chunk (may or may not be in use) | +-+2172mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+21732174And if it's free, it looks like this:21752176chunk-> +- -+2177| User payload (must be in use, or we would have merged!) |2178+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2179+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|2180| Size of this chunk 0| +-+2181mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2182| Next pointer |2183+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2184| Prev pointer |2185+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2186| :2187+- size - sizeof(struct chunk) unused bytes -+2188: |2189chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2190| Size of this chunk |2191+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2192+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|2193| Size of next chunk (must be in use, or we would have merged)| +-+2194mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2195| :2196+- User payload -+2197: |2198+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2199|0|2200+-+2201Note that since we always merge adjacent free chunks, the chunks2202adjacent to a free chunk must be in use.22032204Given a pointer to a chunk (which can be derived trivially from the2205payload pointer) we can, in O(1) time, find out whether the adjacent2206chunks are free, and if so, unlink them from the lists that they2207are on and merge them with the current chunk.22082209Chunks always begin on even word boundaries, so the mem portion2210(which is returned to the user) is also on an even word boundary, and2211thus at least double-word aligned.22122213The P (PINUSE_BIT) bit, stored in the unused low-order bit of the2214chunk size (which is always a multiple of two words), is an in-use2215bit for the *previous* chunk. If that bit is *clear*, then the2216word before the current chunk size contains the previous chunk2217size, and can be used to find the front of the previous chunk.2218The very first chunk allocated always has this bit set, preventing2219access to non-existent (or non-owned) memory. If pinuse is set for2220any given chunk, then you CANNOT determine the size of the2221previous chunk, and might even get a memory addressing fault when2222trying to do so.22232224The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of2225the chunk size redundantly records whether the current chunk is2226inuse (unless the chunk is mmapped). 
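As a concrete illustration (bit names as defined below): a 48-byte
chunk that is in use, and whose predecessor is also in use, has
head == 48 | PINUSE_BIT | CINUSE_BIT == 0x33; chunksize() masks off the
low FLAG_BITS to recover 48, while cinuse() and pinuse() read the two
status bits back.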
This redundancy enables usage2227checks within free and realloc, and reduces indirection when freeing2228and consolidating chunks.22292230Each freshly allocated chunk must have both cinuse and pinuse set.2231That is, each allocated chunk borders either a previously allocated2232and still in-use chunk, or the base of its memory arena. This is2233ensured by making all allocations from the `lowest' part of any2234found chunk. Further, no free chunk physically borders another one,2235so each free chunk is known to be preceded and followed by either2236inuse chunks or the ends of memory.22372238Note that the `foot' of the current chunk is actually represented2239as the prev_foot of the NEXT chunk. This makes it easier to2240deal with alignments etc but can be very confusing when trying2241to extend or adapt this code.22422243The exceptions to all this are224422451. The special chunk `top' is the top-most available chunk (i.e.,2246the one bordering the end of available memory). It is treated2247specially. Top is never included in any bin, is used only if2248no other chunk is available, and is released back to the2249system if it is very large (see M_TRIM_THRESHOLD). In effect,2250the top chunk is treated as larger (and thus less well2251fitting) than any other available chunk. The top chunk2252doesn't update its trailing size field since there is no next2253contiguous chunk that would have to index off it. However,2254space is still allocated for it (TOP_FOOT_SIZE) to enable2255separation or merging when space is extended.225622573. Chunks allocated via mmap, have both cinuse and pinuse bits2258cleared in their head fields. Because they are allocated2259one-by-one, each must carry its own prev_foot field, which is2260also used to hold the offset this chunk has within its mmapped2261region, which is needed to preserve alignment. Each mmapped2262chunk is trailed by the first two fields of a fake next-chunk2263for sake of usage checks.22642265*/22662267struct malloc_chunk {2268size_t prev_foot; /* Size of previous chunk (if free). */2269size_t head; /* Size and inuse bits. */2270struct malloc_chunk* fd; /* double links -- used only if free. */2271struct malloc_chunk* bk;2272};22732274typedef struct malloc_chunk mchunk;2275typedef struct malloc_chunk* mchunkptr;2276typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */2277typedef unsigned int bindex_t; /* Described below */2278typedef unsigned int binmap_t; /* Described below */2279typedef unsigned int flag_t; /* The type of various bit flag sets */22802281/* ------------------- Chunks sizes and alignments ----------------------- */22822283#define MCHUNK_SIZE (sizeof(mchunk))22842285#if FOOTERS2286#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)2287#else /* FOOTERS */2288#define CHUNK_OVERHEAD (SIZE_T_SIZE)2289#endif /* FOOTERS */22902291/* MMapped chunks need a second word of overhead ... */2292#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)2293/* ... 
and additional padding for fake next-chunk at foot */2294#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)22952296/* The smallest size we can malloc is an aligned minimal chunk */2297#define MIN_CHUNK_SIZE \2298((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)22992300/* conversion from malloc headers to user pointers, and back */2301#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))2302#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))2303/* chunk associated with aligned address A */2304#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))23052306/* Bounds on request (not chunk) sizes. */2307#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)2308#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)23092310/* pad request bytes into a usable size */2311#define pad_request(req) \2312(((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)23132314/* pad request, checking for minimum (but not maximum) */2315#define request2size(req) \2316(((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))231723182319/* ------------------ Operations on head and foot fields ----------------- */23202321/*2322The head field of a chunk is or'ed with PINUSE_BIT when previous2323adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in2324use, unless mmapped, in which case both bits are cleared.23252326FLAG4_BIT is not used by this malloc, but might be useful in extensions.2327*/23282329#define PINUSE_BIT (SIZE_T_ONE)2330#define CINUSE_BIT (SIZE_T_TWO)2331#define FLAG4_BIT (SIZE_T_FOUR)2332#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)2333#define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)23342335/* Head value for fenceposts */2336#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)23372338/* extraction of fields from head words */2339#define cinuse(p) ((p)->head & CINUSE_BIT)2340#define pinuse(p) ((p)->head & PINUSE_BIT)2341#define flag4inuse(p) ((p)->head & FLAG4_BIT)2342#define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)2343#define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)23442345#define chunksize(p) ((p)->head & ~(FLAG_BITS))23462347#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)2348#define set_flag4(p) ((p)->head |= FLAG4_BIT)2349#define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)23502351/* Treat space at ptr +/- offset as a chunk */2352#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))2353#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))23542355/* Ptr to next or previous physical malloc_chunk. */2356#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))2357#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))23582359/* extract next chunk's pinuse bit */2360#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)23612362/* Get/set size at footer */2363#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)2364#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))23652366/* Set size, pinuse bit, and foot */2367#define set_size_and_pinuse_of_free_chunk(p, s)\2368((p)->head = (s|PINUSE_BIT), set_foot(p, s))23692370/* Set size, pinuse bit, foot, and clear next pinuse */2371#define set_free_with_pinuse(p, s, n)\2372(clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))23732374/* Get the internal overhead associated with chunk p */2375#define overhead_for(p)\2376(is_mmapped(p)? 
MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)23772378/* Return true if malloced space is not necessarily cleared */2379#if MMAP_CLEARS2380#define calloc_must_clear(p) (!is_mmapped(p))2381#else /* MMAP_CLEARS */2382#define calloc_must_clear(p) (1)2383#endif /* MMAP_CLEARS */23842385/* ---------------------- Overlaid data structures ----------------------- */23862387/*2388When chunks are not in use, they are treated as nodes of either2389lists or trees.23902391"Small" chunks are stored in circular doubly-linked lists, and look2392like this:23932394chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2395| Size of previous chunk |2396+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2397`head:' | Size of chunk, in bytes |P|2398mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2399| Forward pointer to next chunk in list |2400+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2401| Back pointer to previous chunk in list |2402+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2403| Unused space (may be 0 bytes long) .2404. .2405. |2406nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2407`foot:' | Size of chunk, in bytes |2408+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+24092410Larger chunks are kept in a form of bitwise digital trees (aka2411tries) keyed on chunksizes. Because malloc_tree_chunks are only for2412free chunks greater than 256 bytes, their size doesn't impose any2413constraints on user chunk sizes. Each node looks like:24142415chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2416| Size of previous chunk |2417+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2418`head:' | Size of chunk, in bytes |P|2419mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2420| Forward pointer to next chunk of same size |2421+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2422| Back pointer to previous chunk of same size |2423+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2424| Pointer to left child (child[0]) |2425+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2426| Pointer to right child (child[1]) |2427+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2428| Pointer to parent |2429+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2430| bin index of this chunk |2431+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2432| Unused space .2433. |2434nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+2435`foot:' | Size of chunk, in bytes |2436+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+24372438Each tree holding treenodes is a tree of unique chunk sizes. Chunks2439of the same size are arranged in a circularly-linked list, with only2440the oldest chunk (the next to be used, in our FIFO ordering)2441actually in the tree. (Tree members are distinguished by a non-null2442parent pointer.) If a chunk with the same size an an existing node2443is inserted, it is linked off the existing node using pointers that2444work in the same way as fd/bk pointers of small chunks.24452446Each tree contains a power of 2 sized range of chunk sizes (the2447smallest is 0x100 <= x < 0x180), which is is divided in half at each2448tree level, with the chunks in the smaller half of the range (0x1002449<= x < 0x140 for the top nose) in the left subtree and the larger2450half (0x140 <= x < 0x180) in the right subtree. 
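For instance, a free chunk of size 0x128 has the 0x40 bit clear and so
belongs in the left (0x100 <= x < 0x140) half, while one of size 0x150
has that bit set and belongs in the right half.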
This is, of course,2451done by inspecting individual bits.24522453Using these rules, each node's left subtree contains all smaller2454sizes than its right subtree. However, the node at the root of each2455subtree has no particular ordering relationship to either. (The2456dividing line between the subtree sizes is based on trie relation.)2457If we remove the last chunk of a given size from the interior of the2458tree, we need to replace it with a leaf node. The tree ordering2459rules permit a node to be replaced by any leaf below it.24602461The smallest chunk in a tree (a common operation in a best-fit2462allocator) can be found by walking a path to the leftmost leaf in2463the tree. Unlike a usual binary tree, where we follow left child2464pointers until we reach a null, here we follow the right child2465pointer any time the left one is null, until we reach a leaf with2466both child pointers null. The smallest chunk in the tree will be2467somewhere along that path.24682469The worst case number of steps to add, find, or remove a node is2470bounded by the number of bits differentiating chunks within2471bins. Under current bin calculations, this ranges from 6 up to 212472(for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case2473is of course much better.2474*/24752476struct malloc_tree_chunk {2477/* The first four fields must be compatible with malloc_chunk */2478size_t prev_foot;2479size_t head;2480struct malloc_tree_chunk* fd;2481struct malloc_tree_chunk* bk;24822483struct malloc_tree_chunk* child[2];2484struct malloc_tree_chunk* parent;2485bindex_t index;2486};24872488typedef struct malloc_tree_chunk tchunk;2489typedef struct malloc_tree_chunk* tchunkptr;2490typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */24912492/* A little helper macro for trees */2493#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])24942495/* ----------------------------- Segments -------------------------------- */24962497/*2498Each malloc space may include non-contiguous segments, held in a2499list headed by an embedded malloc_segment record representing the2500top-most space. Segments also include flags holding properties of2501the space. Large chunks that are directly allocated by mmap are not2502included in this list. They are instead independently created and2503destroyed without otherwise keeping track of them.25042505Segment management mainly comes into play for spaces allocated by2506MMAP. Any call to MMAP might or might not return memory that is2507adjacent to an existing segment. MORECORE normally contiguously2508extends the current space, so this space is almost always adjacent,2509which is simpler and faster to deal with. (This is why MORECORE is2510used preferentially to MMAP when both are available -- see2511sys_alloc.) When allocating using MMAP, we don't use any of the2512hinting mechanisms (inconsistently) supported in various2513implementations of unix mmap, or distinguish reserving from2514committing memory. Instead, we just ask for space, and exploit2515contiguity when we get it. It is probably possible to do2516better than this on some systems, but no general scheme seems2517to be significantly better.25182519Management entails a simpler variant of the consolidation scheme2520used for chunks to reduce fragmentation -- new adjacent memory is2521normally prepended or appended to an existing segment. 
However,2522there are limitations compared to chunk consolidation that mostly2523reflect the fact that segment processing is relatively infrequent2524(occurring only when getting memory from system) and that we2525don't expect to have huge numbers of segments:25262527* Segments are not indexed, so traversal requires linear scans. (It2528would be possible to index these, but is not worth the extra2529overhead and complexity for most programs on most platforms.)2530* New segments are only appended to old ones when holding top-most2531memory; if they cannot be prepended to others, they are held in2532different segments.25332534Except for the top-most segment of an mstate, each segment record2535is kept at the tail of its segment. Segments are added by pushing2536segment records onto the list headed by &mstate.seg for the2537containing mstate.25382539Segment flags control allocation/merge/deallocation policies:2540* If EXTERN_BIT set, then we did not allocate this segment,2541and so should not try to deallocate or merge with others.2542(This currently holds only for the initial segment passed2543into create_mspace_with_base.)2544* If USE_MMAP_BIT set, the segment may be merged with2545other surrounding mmapped segments and trimmed/de-allocated2546using munmap.2547* If neither bit is set, then the segment was obtained using2548MORECORE so can be merged with surrounding MORECORE'd segments2549and deallocated/trimmed using MORECORE with negative arguments.2550*/25512552struct malloc_segment {2553char* base; /* base address */2554size_t size; /* allocated size */2555struct malloc_segment* next; /* ptr to next segment */2556flag_t sflags; /* mmap and extern flag */2557};25582559#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)2560#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)25612562typedef struct malloc_segment msegment;2563typedef struct malloc_segment* msegmentptr;25642565/* ---------------------------- malloc_state ----------------------------- */25662567/*2568A malloc_state holds all of the bookkeeping for a space.2569The main fields are:25702571Top2572The topmost chunk of the currently active segment. Its size is2573cached in topsize. The actual size of topmost space is2574topsize+TOP_FOOT_SIZE, which includes space reserved for adding2575fenceposts and segment records if necessary when getting more2576space from the system. The size at which to autotrim top is2577cached from mparams in trim_check, except that it is disabled if2578an autotrim fails.25792580Designated victim (dv)2581This is the preferred chunk for servicing small requests that2582don't have exact fits. It is normally the chunk split off most2583recently to service another small request. Its size is cached in2584dvsize. The link fields of this chunk are not maintained since it2585is not kept in a bin.25862587SmallBins2588An array of bin headers for free chunks. These bins hold chunks2589with sizes less than MIN_LARGE_SIZE bytes. Each bin contains2590chunks of all the same size, spaced 8 bytes apart. To simplify2591use in double-linked lists, each bin header acts as a malloc_chunk2592pointing to the real first node, if it exists (else pointing to2593itself). This avoids special-casing for headers. But to avoid2594waste, we allocate only the fd/bk pointers of bins, and then use2595repositioning tricks to treat these as the fields of a chunk.25962597TreeBins2598Treebins are pointers to the roots of trees holding a range of2599sizes. 
There are 2 equally spaced treebins for each power of two2600from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything2601larger.26022603Bin maps2604There is one bit map for small bins ("smallmap") and one for2605treebins ("treemap). Each bin sets its bit when non-empty, and2606clears the bit when empty. Bit operations are then used to avoid2607bin-by-bin searching -- nearly all "search" is done without ever2608looking at bins that won't be selected. The bit maps2609conservatively use 32 bits per map word, even if on 64bit system.2610For a good description of some of the bit-based techniques used2611here, see Henry S. Warren Jr's book "Hacker's Delight" (and2612supplement at http://hackersdelight.org/). Many of these are2613intended to reduce the branchiness of paths through malloc etc, as2614well as to reduce the number of memory locations read or written.26152616Segments2617A list of segments headed by an embedded malloc_segment record2618representing the initial space.26192620Address check support2621The least_addr field is the least address ever obtained from2622MORECORE or MMAP. Attempted frees and reallocs of any address less2623than this are trapped (unless INSECURE is defined).26242625Magic tag2626A cross-check field that should always hold same value as mparams.magic.26272628Max allowed footprint2629The maximum allowed bytes to allocate from system (zero means no limit)26302631Flags2632Bits recording whether to use MMAP, locks, or contiguous MORECORE26332634Statistics2635Each space keeps track of current and maximum system memory2636obtained via MORECORE or MMAP.26372638Trim support2639Fields holding the amount of unused topmost memory that should trigger2640trimming, and a counter to force periodic scanning to release unused2641non-topmost segments.26422643Locking2644If USE_LOCKS is defined, the "mutex" lock is acquired and released2645around every public call using this mspace.26462647Extension support2648A void* pointer and a size_t field that can be used to help implement2649extensions to this malloc.2650*/26512652/* Bin types, widths and sizes */2653#define NSMALLBINS (32U)2654#define NTREEBINS (32U)2655#define SMALLBIN_SHIFT (3U)2656#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)2657#define TREEBIN_SHIFT (8U)2658#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)2659#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)2660#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)26612662struct malloc_state {2663binmap_t smallmap;2664binmap_t treemap;2665size_t dvsize;2666size_t topsize;2667char* least_addr;2668mchunkptr dv;2669mchunkptr top;2670size_t trim_check;2671size_t release_checks;2672size_t magic;2673mchunkptr smallbins[(NSMALLBINS+1)*2];2674tbinptr treebins[NTREEBINS];2675size_t footprint;2676size_t max_footprint;2677size_t footprint_limit; /* zero means no limit */2678flag_t mflags;2679#if USE_LOCKS2680MLOCK_T mutex; /* locate lock among fields that rarely change */2681#endif /* USE_LOCKS */2682msegment seg;2683void* extp; /* Unused but available for extensions */2684size_t exts;2685};26862687typedef struct malloc_state* mstate;26882689/* ------------- Global malloc_state and malloc_params ------------------- */26902691/*2692malloc_params holds global properties, including those that can be2693dynamically set using mallopt. There is a single instance, mparams,2694initialized in init_mparams. 
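(For example, a successful mallopt(M_GRANULARITY, 64*1024) call simply
stores the new value into mparams.granularity, after checking that it
is a power of two no smaller than the page size.)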
Note that the non-zeroness of "magic"2695also serves as an initialization flag.2696*/26972698struct malloc_params {2699size_t magic;2700size_t page_size;2701size_t granularity;2702size_t mmap_threshold;2703size_t trim_threshold;2704flag_t default_mflags;2705};27062707static struct malloc_params mparams;27082709/* Ensure mparams initialized */2710#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())27112712#if !ONLY_MSPACES27132714/* The global malloc_state used for all non-"mspace" calls */2715static struct malloc_state _gm_;2716#define gm (&_gm_)2717#define is_global(M) ((M) == &_gm_)27182719#endif /* !ONLY_MSPACES */27202721#define is_initialized(M) ((M)->top != 0)27222723/* -------------------------- system alloc setup ------------------------- */27242725/* Operations on mflags */27262727#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)2728#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)2729#if USE_LOCKS2730#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)2731#else2732#define disable_lock(M)2733#endif27342735#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)2736#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)2737#if HAVE_MMAP2738#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)2739#else2740#define disable_mmap(M)2741#endif27422743#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)2744#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)27452746#define set_lock(M,L)\2747((M)->mflags = (L)?\2748((M)->mflags | USE_LOCK_BIT) :\2749((M)->mflags & ~USE_LOCK_BIT))27502751/* page-align a size */2752#define page_align(S)\2753(((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))27542755/* granularity-align a size */2756#define granularity_align(S)\2757(((S) + (mparams.granularity - SIZE_T_ONE))\2758& ~(mparams.granularity - SIZE_T_ONE))275927602761/* For mmap, use granularity alignment on windows, else page-align */2762#ifdef WIN322763#define mmap_align(S) granularity_align(S)2764#else2765#define mmap_align(S) page_align(S)2766#endif27672768/* For sys_alloc, enough padding to ensure can malloc request on success */2769#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)27702771#define is_page_aligned(S)\2772(((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)2773#define is_granularity_aligned(S)\2774(((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)27752776/* True if segment S holds address A */2777#define segment_holds(S, A)\2778((char*)(A) >= S->base && (char*)(A) < S->base + S->size)27792780/* Return segment holding given address */2781static msegmentptr segment_holding(mstate m, char* addr) {2782msegmentptr sp = &m->seg;2783for (;;) {2784if (addr >= sp->base && addr < sp->base + sp->size)2785return sp;2786if ((sp = sp->next) == 0)2787return 0;2788}2789}27902791/* Return true if segment contains a segment link */2792static int has_segment_link(mstate m, msegmentptr ss) {2793msegmentptr sp = &m->seg;2794for (;;) {2795if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)2796return 1;2797if ((sp = sp->next) == 0)2798return 0;2799}2800}28012802#ifndef MORECORE_CANNOT_TRIM2803#define should_trim(M,s) ((s) > (M)->trim_check)2804#else /* MORECORE_CANNOT_TRIM */2805#define should_trim(M,s) (0)2806#endif /* MORECORE_CANNOT_TRIM */28072808/*2809TOP_FOOT_SIZE is padding at the end of a segment, including space2810that may be needed to place segment records and fenceposts when new2811noncontiguous segments are added.2812*/2813#define TOP_FOOT_SIZE 
\2814(align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)281528162817/* ------------------------------- Hooks -------------------------------- */28182819/*2820PREACTION should be defined to return 0 on success, and nonzero on2821failure. If you are not using locking, you can redefine these to do2822anything you like.2823*/28242825#if USE_LOCKS2826#define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)2827#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }2828#else /* USE_LOCKS */28292830#ifndef PREACTION2831#define PREACTION(M) (0)2832#endif /* PREACTION */28332834#ifndef POSTACTION2835#define POSTACTION(M)2836#endif /* POSTACTION */28372838#endif /* USE_LOCKS */28392840/*2841CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.2842USAGE_ERROR_ACTION is triggered on detected bad frees and2843reallocs. The argument p is an address that might have triggered the2844fault. It is ignored by the two predefined actions, but might be2845useful in custom actions that try to help diagnose errors.2846*/28472848#if PROCEED_ON_ERROR28492850/* A count of the number of corruption errors causing resets */2851int malloc_corruption_error_count;28522853/* default corruption action */2854static void reset_on_error(mstate m);28552856#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)2857#define USAGE_ERROR_ACTION(m, p)28582859#else /* PROCEED_ON_ERROR */28602861#ifndef CORRUPTION_ERROR_ACTION2862#define CORRUPTION_ERROR_ACTION(m) ABORT2863#endif /* CORRUPTION_ERROR_ACTION */28642865#ifndef USAGE_ERROR_ACTION2866#define USAGE_ERROR_ACTION(m,p) ABORT2867#endif /* USAGE_ERROR_ACTION */28682869#endif /* PROCEED_ON_ERROR */287028712872/* -------------------------- Debugging setup ---------------------------- */28732874#if ! DEBUG28752876#define check_free_chunk(M,P)2877#define check_inuse_chunk(M,P)2878#define check_malloced_chunk(M,P,N)2879#define check_mmapped_chunk(M,P)2880#define check_malloc_state(M)2881#define check_top_chunk(M,P)28822883#else /* DEBUG */2884#define check_free_chunk(M,P) do_check_free_chunk(M,P)2885#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)2886#define check_top_chunk(M,P) do_check_top_chunk(M,P)2887#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)2888#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)2889#define check_malloc_state(M) do_check_malloc_state(M)28902891static void do_check_any_chunk(mstate m, mchunkptr p);2892static void do_check_top_chunk(mstate m, mchunkptr p);2893static void do_check_mmapped_chunk(mstate m, mchunkptr p);2894static void do_check_inuse_chunk(mstate m, mchunkptr p);2895static void do_check_free_chunk(mstate m, mchunkptr p);2896static void do_check_malloced_chunk(mstate m, void* mem, size_t s);2897static void do_check_tree(mstate m, tchunkptr t);2898static void do_check_treebin(mstate m, bindex_t i);2899static void do_check_smallbin(mstate m, bindex_t i);2900static void do_check_malloc_state(mstate m);2901static int bin_find(mstate m, mchunkptr x);2902static size_t traverse_and_check(mstate m);2903#endif /* DEBUG */29042905/* ---------------------------- Indexing Bins ---------------------------- */29062907#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)2908#define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)2909#define small_index2size(i) ((i) << SMALLBIN_SHIFT)2910#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))29112912/* addressing by index. 
See above about smallbin repositioning */2913#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))2914#define treebin_at(M,i) (&((M)->treebins[i]))29152916/* assign tree index for size S to variable I. Use x86 asm if possible */2917#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__) || defined(__EMSCRIPTEN__))2918#define compute_tree_index(S, I)\2919{\2920unsigned int X = S >> TREEBIN_SHIFT;\2921if (X == 0)\2922I = 0;\2923else if (X > 0xFFFF)\2924I = NTREEBINS-1;\2925else {\2926unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \2927I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\2928}\2929}29302931#elif defined (__INTEL_COMPILER)2932#define compute_tree_index(S, I)\2933{\2934size_t X = S >> TREEBIN_SHIFT;\2935if (X == 0)\2936I = 0;\2937else if (X > 0xFFFF)\2938I = NTREEBINS-1;\2939else {\2940unsigned int K = _bit_scan_reverse (X); \2941I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\2942}\2943}29442945#elif defined(_MSC_VER) && _MSC_VER>=13002946#define compute_tree_index(S, I)\2947{\2948size_t X = S >> TREEBIN_SHIFT;\2949if (X == 0)\2950I = 0;\2951else if (X > 0xFFFF)\2952I = NTREEBINS-1;\2953else {\2954unsigned int K;\2955_BitScanReverse((DWORD *) &K, (DWORD) X);\2956I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\2957}\2958}29592960#else /* GNUC */2961#define compute_tree_index(S, I)\2962{\2963size_t X = S >> TREEBIN_SHIFT;\2964if (X == 0)\2965I = 0;\2966else if (X > 0xFFFF)\2967I = NTREEBINS-1;\2968else {\2969unsigned int Y = (unsigned int)X;\2970unsigned int N = ((Y - 0x100) >> 16) & 8;\2971unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\2972N += K;\2973N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\2974K = 14 - N + ((Y <<= K) >> 15);\2975I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\2976}\2977}2978#endif /* GNUC */29792980/* Bit representing maximum resolved size in a treebin at i */2981#define bit_for_tree_index(i) \2982(i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)29832984/* Shift placing maximum resolved bit in a treebin at i as sign bit */2985#define leftshift_for_tree_index(i) \2986((i == NTREEBINS-1)? 0 : \2987((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))29882989/* The size of the smallest chunk held in bin with index i */2990#define minsize_for_tree_index(i) \2991((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \2992(((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))299329942995/* ------------------------ Operations on bin maps ----------------------- */29962997/* bit corresponding to given index */2998#define idx2bit(i) ((binmap_t)(1) << (i))29993000/* Mark/Clear bits with given index */3001#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))3002#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))3003#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))30043005#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))3006#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))3007#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))30083009/* isolate the least set bit of a bitmap */3010#define least_bit(x) ((x) & -(x))30113012/* mask with all bits to left of least bit of x on */3013#define left_bits(x) ((x<<1) | -(x<<1))30143015/* mask with all bits to left of or equal to least bit of x on */3016#define same_or_left_bits(x) ((x) | -(x))30173018/* index corresponding to given bit. 
Use x86 asm if possible */

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__) || defined(__EMSCRIPTEN__))
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = __builtin_ctz(X); \
  I = (bindex_t)J;\
}

#elif defined (__INTEL_COMPILER)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = _bit_scan_forward (X); \
  I = (bindex_t)J;\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  _BitScanForward((DWORD *) &J, X);\
  I = (bindex_t)J;\
}

#elif USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* GNUC */
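/*
  Illustrative example of how the bin-map helpers above are typically used
  (see dlmalloc() below for the actual call sites).  Suppose smallmap ==
  0x220, i.e. bins 5 and 9 are non-empty.  Then
    least_bit(0x220)          == 0x20   (isolate the lowest set bit)
    compute_bit2idx(0x20, i)  leaves i == 5
  so the first non-empty small bin at or above index 5 is located without a
  scan loop.  dlmalloc() combines these with idx2bit()/left_bits() to mask
  off all bins that are too small for the request before picking a bin.
*/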
/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  that is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc). This turns out to be cheaper than relying on hashes.
*/

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has inuse status */
#define ok_inuse(p)      is_inuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_inuse(p)      (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */

/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Macros for setting head/foot of non-mmapped chunks */

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */

/* ---------------------------- setting mparams -------------------------- */

#if LOCK_AT_FORK
static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
#endif /* LOCK_AT_FORK */

/* Initialize mparams */
static int init_mparams(void) {
#ifdef NEED_GLOBAL_LOCK_INIT
  if (malloc_global_mutex_status <= 0)
    init_malloc_global_mutex();
#endif

  ACQUIRE_MALLOC_GLOBAL_LOCK();
  if (mparams.magic == 0) {
    size_t magic;
    size_t psize;
    size_t gsize;

#ifndef WIN32
    psize = malloc_getpagesize;
    gsize =
((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);3197#else /* WIN32 */3198{3199SYSTEM_INFO system_info;3200GetSystemInfo(&system_info);3201psize = system_info.dwPageSize;3202gsize = ((DEFAULT_GRANULARITY != 0)?3203DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);3204}3205#endif /* WIN32 */32063207/* Sanity-check configuration:3208size_t must be unsigned and as wide as pointer type.3209ints must be at least 4 bytes.3210alignment must be at least 8.3211Alignment, min chunk size, and page size must all be powers of 2.3212*/3213if ((sizeof(size_t) != sizeof(char*)) ||3214(MAX_SIZE_T < MIN_CHUNK_SIZE) ||3215(sizeof(int) < 4) ||3216(MALLOC_ALIGNMENT < (size_t)8U) ||3217((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||3218((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||3219((gsize & (gsize-SIZE_T_ONE)) != 0) ||3220((psize & (psize-SIZE_T_ONE)) != 0))3221ABORT;3222mparams.granularity = gsize;3223mparams.page_size = psize;3224mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;3225mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;3226#if MORECORE_CONTIGUOUS3227mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;3228#else /* MORECORE_CONTIGUOUS */3229mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;3230#endif /* MORECORE_CONTIGUOUS */32313232#if !ONLY_MSPACES3233/* Set up lock for main malloc area */3234gm->mflags = mparams.default_mflags;3235(void)INITIAL_LOCK(&gm->mutex);3236#endif3237#if LOCK_AT_FORK3238pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);3239#endif32403241{3242#if USE_DEV_RANDOM3243int fd;3244unsigned char buf[sizeof(size_t)];3245/* Try to use /dev/urandom, else fall back on using time */3246if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&3247read(fd, buf, sizeof(buf)) == sizeof(buf)) {3248magic = *((size_t *) buf);3249close(fd);3250}3251else3252#endif /* USE_DEV_RANDOM */3253#ifdef WIN323254magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);3255#elif defined(LACKS_TIME_H) || defined(__EMSCRIPTEN__)3256magic = (size_t)&magic ^ (size_t)0x55555555U;3257#else3258magic = (size_t)(time(0) ^ (size_t)0x55555555U);3259#endif3260magic |= (size_t)8U; /* ensure nonzero */3261magic &= ~(size_t)7U; /* improve chances of fault for bad values */3262/* Until memory modes commonly available, use volatile-write */3263(*(volatile size_t *)(&(mparams.magic))) = magic;3264}3265}32663267RELEASE_MALLOC_GLOBAL_LOCK();3268return 1;3269}32703271/* support for mallopt */3272static int change_mparam(int param_number, int value) {3273size_t val;3274ensure_initialization();3275val = (value == -1)? MAX_SIZE_T : (size_t)value;3276switch(param_number) {3277case M_TRIM_THRESHOLD:3278mparams.trim_threshold = val;3279return 1;3280case M_GRANULARITY:3281if (val >= mparams.page_size && ((val & (val-1)) == 0)) {3282mparams.granularity = val;3283return 1;3284}3285else3286return 0;3287case M_MMAP_THRESHOLD:3288mparams.mmap_threshold = val;3289return 1;3290default:3291return 0;3292}3293}32943295#if DEBUG3296/* ------------------------- Debugging Support --------------------------- */32973298/* Check properties of any chunk, whether free, inuse, mmapped etc */3299static void do_check_any_chunk(mstate m, mchunkptr p) {3300assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));3301assert(ok_address(m, p));3302}33033304/* Check properties of top chunk */3305static void do_check_top_chunk(mstate m, mchunkptr p) {3306msegmentptr sp = segment_holding(m, (char*)p);3307size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! 
*/3308assert(sp != 0);3309assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));3310assert(ok_address(m, p));3311assert(sz == m->topsize);3312assert(sz > 0);3313assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);3314assert(pinuse(p));3315assert(!pinuse(chunk_plus_offset(p, sz)));3316}33173318/* Check properties of (inuse) mmapped chunks */3319static void do_check_mmapped_chunk(mstate m, mchunkptr p) {3320size_t sz = chunksize(p);3321size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);3322assert(is_mmapped(p));3323assert(use_mmap(m));3324assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));3325assert(ok_address(m, p));3326assert(!is_small(sz));3327assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);3328assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);3329assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);3330}33313332/* Check properties of inuse chunks */3333static void do_check_inuse_chunk(mstate m, mchunkptr p) {3334do_check_any_chunk(m, p);3335assert(is_inuse(p));3336assert(next_pinuse(p));3337/* If not pinuse and not mmapped, previous chunk has OK offset */3338assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);3339if (is_mmapped(p))3340do_check_mmapped_chunk(m, p);3341}33423343/* Check properties of free chunks */3344static void do_check_free_chunk(mstate m, mchunkptr p) {3345size_t sz = chunksize(p);3346mchunkptr next = chunk_plus_offset(p, sz);3347do_check_any_chunk(m, p);3348assert(!is_inuse(p));3349assert(!next_pinuse(p));3350assert (!is_mmapped(p));3351if (p != m->dv && p != m->top) {3352if (sz >= MIN_CHUNK_SIZE) {3353assert((sz & CHUNK_ALIGN_MASK) == 0);3354assert(is_aligned(chunk2mem(p)));3355assert(next->prev_foot == sz);3356assert(pinuse(p));3357assert (next == m->top || is_inuse(next));3358assert(p->fd->bk == p);3359assert(p->bk->fd == p);3360}3361else /* markers are always of size SIZE_T_SIZE */3362assert(sz == SIZE_T_SIZE);3363}3364}33653366/* Check properties of malloced chunks at the point they are malloced */3367static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {3368if (mem != 0) {3369mchunkptr p = mem2chunk(mem);3370size_t sz = p->head & ~INUSE_BITS;3371do_check_inuse_chunk(m, p);3372assert((sz & CHUNK_ALIGN_MASK) == 0);3373assert(sz >= MIN_CHUNK_SIZE);3374assert(sz >= s);3375/* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */3376assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));3377}3378}33793380/* Check a tree and its subtrees. 
*/3381static void do_check_tree(mstate m, tchunkptr t) {3382tchunkptr head = 0;3383tchunkptr u = t;3384bindex_t tindex = t->index;3385size_t tsize = chunksize(t);3386bindex_t idx;3387compute_tree_index(tsize, idx);3388assert(tindex == idx);3389assert(tsize >= MIN_LARGE_SIZE);3390assert(tsize >= minsize_for_tree_index(idx));3391assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));33923393do { /* traverse through chain of same-sized nodes */3394do_check_any_chunk(m, ((mchunkptr)u));3395assert(u->index == tindex);3396assert(chunksize(u) == tsize);3397assert(!is_inuse(u));3398assert(!next_pinuse(u));3399assert(u->fd->bk == u);3400assert(u->bk->fd == u);3401if (u->parent == 0) {3402assert(u->child[0] == 0);3403assert(u->child[1] == 0);3404}3405else {3406assert(head == 0); /* only one node on chain has parent */3407head = u;3408assert(u->parent != u);3409assert (u->parent->child[0] == u ||3410u->parent->child[1] == u ||3411*((tbinptr*)(u->parent)) == u);3412if (u->child[0] != 0) {3413assert(u->child[0]->parent == u);3414assert(u->child[0] != u);3415do_check_tree(m, u->child[0]);3416}3417if (u->child[1] != 0) {3418assert(u->child[1]->parent == u);3419assert(u->child[1] != u);3420do_check_tree(m, u->child[1]);3421}3422if (u->child[0] != 0 && u->child[1] != 0) {3423assert(chunksize(u->child[0]) < chunksize(u->child[1]));3424}3425}3426u = u->fd;3427} while (u != t);3428assert(head != 0);3429}34303431/* Check all the chunks in a treebin. */3432static void do_check_treebin(mstate m, bindex_t i) {3433tbinptr* tb = treebin_at(m, i);3434tchunkptr t = *tb;3435int empty = (m->treemap & (1U << i)) == 0;3436if (t == 0)3437assert(empty);3438if (!empty)3439do_check_tree(m, t);3440}34413442/* Check all the chunks in a smallbin. */3443static void do_check_smallbin(mstate m, bindex_t i) {3444sbinptr b = smallbin_at(m, i);3445mchunkptr p = b->bk;3446unsigned int empty = (m->smallmap & (1U << i)) == 0;3447if (p == b)3448assert(empty);3449if (!empty) {3450for (; p != b; p = p->bk) {3451size_t size = chunksize(p);3452mchunkptr q;3453/* each chunk claims to be free */3454do_check_free_chunk(m, p);3455/* chunk belongs in bin */3456assert(small_index(size) == i);3457assert(p->bk == b || chunksize(p->bk) == chunksize(p));3458/* chunk is followed by an inuse chunk */3459q = next_chunk(p);3460if (q->head != FENCEPOST_HEAD)3461do_check_inuse_chunk(m, q);3462}3463}3464}34653466/* Find x in a bin. Used in other check functions. 
*/3467static int bin_find(mstate m, mchunkptr x) {3468size_t size = chunksize(x);3469if (is_small(size)) {3470bindex_t sidx = small_index(size);3471sbinptr b = smallbin_at(m, sidx);3472if (smallmap_is_marked(m, sidx)) {3473mchunkptr p = b;3474do {3475if (p == x)3476return 1;3477} while ((p = p->fd) != b);3478}3479}3480else {3481bindex_t tidx;3482compute_tree_index(size, tidx);3483if (treemap_is_marked(m, tidx)) {3484tchunkptr t = *treebin_at(m, tidx);3485size_t sizebits = size << leftshift_for_tree_index(tidx);3486while (t != 0 && chunksize(t) != size) {3487t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];3488sizebits <<= 1;3489}3490if (t != 0) {3491tchunkptr u = t;3492do {3493if (u == (tchunkptr)x)3494return 1;3495} while ((u = u->fd) != t);3496}3497}3498}3499return 0;3500}35013502/* Traverse each chunk and check it; return total */3503static size_t traverse_and_check(mstate m) {3504size_t sum = 0;3505if (is_initialized(m)) {3506msegmentptr s = &m->seg;3507sum += m->topsize + TOP_FOOT_SIZE;3508while (s != 0) {3509mchunkptr q = align_as_chunk(s->base);3510mchunkptr lastq = 0;3511assert(pinuse(q));3512while (segment_holds(s, q) &&3513q != m->top && q->head != FENCEPOST_HEAD) {3514sum += chunksize(q);3515if (is_inuse(q)) {3516assert(!bin_find(m, q));3517do_check_inuse_chunk(m, q);3518}3519else {3520assert(q == m->dv || bin_find(m, q));3521assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */3522do_check_free_chunk(m, q);3523}3524lastq = q;3525q = next_chunk(q);3526}3527s = s->next;3528}3529}3530return sum;3531}353235333534/* Check all properties of malloc_state. */3535static void do_check_malloc_state(mstate m) {3536bindex_t i;3537size_t total;3538/* check bins */3539for (i = 0; i < NSMALLBINS; ++i)3540do_check_smallbin(m, i);3541for (i = 0; i < NTREEBINS; ++i)3542do_check_treebin(m, i);35433544if (m->dvsize != 0) { /* check dv chunk */3545do_check_any_chunk(m, m->dv);3546assert(m->dvsize == chunksize(m->dv));3547assert(m->dvsize >= MIN_CHUNK_SIZE);3548assert(bin_find(m, m->dv) == 0);3549}35503551if (m->top != 0) { /* check top chunk */3552do_check_top_chunk(m, m->top);3553/*assert(m->topsize == chunksize(m->top)); redundant */3554assert(m->topsize > 0);3555assert(bin_find(m, m->top) == 0);3556}35573558total = traverse_and_check(m);3559assert(total <= m->footprint);3560assert(m->footprint <= m->max_footprint);3561}3562#endif /* DEBUG */35633564/* ----------------------------- statistics ------------------------------ */35653566#if !NO_MALLINFO3567static struct mallinfo internal_mallinfo(mstate m) {3568struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };3569ensure_initialization();3570if (!PREACTION(m)) {3571check_malloc_state(m);3572if (is_initialized(m)) {3573size_t nfree = SIZE_T_ONE; /* top always free */3574size_t mfree = m->topsize + TOP_FOOT_SIZE;3575size_t sum = mfree;3576msegmentptr s = &m->seg;3577while (s != 0) {3578mchunkptr q = align_as_chunk(s->base);3579while (segment_holds(s, q) &&3580q != m->top && q->head != FENCEPOST_HEAD) {3581size_t sz = chunksize(q);3582sum += sz;3583if (!is_inuse(q)) {3584mfree += sz;3585++nfree;3586}3587q = next_chunk(q);3588}3589s = s->next;3590}35913592nm.arena = sum;3593nm.ordblks = nfree;3594nm.hblkhd = m->footprint - sum;3595nm.usmblks = m->max_footprint;3596nm.uordblks = m->footprint - mfree;3597nm.fordblks = mfree;3598nm.keepcost = m->topsize;3599}36003601POSTACTION(m);3602}3603return nm;3604}3605#endif /* !NO_MALLINFO */36063607#if !NO_MALLOC_STATS3608static void internal_malloc_stats(mstate m) 
{3609ensure_initialization();3610if (!PREACTION(m)) {3611size_t maxfp = 0;3612size_t fp = 0;3613size_t used = 0;3614check_malloc_state(m);3615if (is_initialized(m)) {3616msegmentptr s = &m->seg;3617maxfp = m->max_footprint;3618fp = m->footprint;3619used = fp - (m->topsize + TOP_FOOT_SIZE);36203621while (s != 0) {3622mchunkptr q = align_as_chunk(s->base);3623while (segment_holds(s, q) &&3624q != m->top && q->head != FENCEPOST_HEAD) {3625if (!is_inuse(q))3626used -= chunksize(q);3627q = next_chunk(q);3628}3629s = s->next;3630}3631}3632POSTACTION(m); /* drop lock */3633fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));3634fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));3635fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));3636}3637}3638#endif /* NO_MALLOC_STATS */36393640/* ----------------------- Operations on smallbins ----------------------- */36413642/*3643Various forms of linking and unlinking are defined as macros. Even3644the ones for trees, which are very long but have very short typical3645paths. This is ugly but reduces reliance on inlining support of3646compilers.3647*/36483649/* Link a free chunk into a smallbin */3650#define insert_small_chunk(M, P, S) {\3651bindex_t I = small_index(S);\3652mchunkptr B = smallbin_at(M, I);\3653mchunkptr F = B;\3654assert(S >= MIN_CHUNK_SIZE);\3655if (!smallmap_is_marked(M, I))\3656mark_smallmap(M, I);\3657else if (RTCHECK(ok_address(M, B->fd)))\3658F = B->fd;\3659else {\3660CORRUPTION_ERROR_ACTION(M);\3661}\3662B->fd = P;\3663F->bk = P;\3664P->fd = F;\3665P->bk = B;\3666}36673668/* Unlink a chunk from a smallbin */3669#define unlink_small_chunk(M, P, S) {\3670mchunkptr F = P->fd;\3671mchunkptr B = P->bk;\3672bindex_t I = small_index(S);\3673assert(P != B);\3674assert(P != F);\3675assert(chunksize(P) == small_index2size(I));\3676if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \3677if (B == F) {\3678clear_smallmap(M, I);\3679}\3680else if (RTCHECK(B == smallbin_at(M,I) ||\3681(ok_address(M, B) && B->fd == P))) {\3682F->bk = B;\3683B->fd = F;\3684}\3685else {\3686CORRUPTION_ERROR_ACTION(M);\3687}\3688}\3689else {\3690CORRUPTION_ERROR_ACTION(M);\3691}\3692}36933694/* Unlink the first chunk from a smallbin */3695#define unlink_first_small_chunk(M, B, P, I) {\3696mchunkptr F = P->fd;\3697assert(P != B);\3698assert(P != F);\3699assert(chunksize(P) == small_index2size(I));\3700if (B == F) {\3701clear_smallmap(M, I);\3702}\3703else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\3704F->bk = B;\3705B->fd = F;\3706}\3707else {\3708CORRUPTION_ERROR_ACTION(M);\3709}\3710}37113712/* Replace dv node, binning the old one */3713/* Used only when dvsize known to be small */3714#define replace_dv(M, P, S) {\3715size_t DVS = M->dvsize;\3716assert(is_small(DVS));\3717if (DVS != 0) {\3718mchunkptr DV = M->dv;\3719insert_small_chunk(M, DV, DVS);\3720}\3721M->dvsize = S;\3722M->dv = P;\3723}37243725/* ------------------------- Operations on trees ------------------------- */37263727/* Insert chunk into tree */3728#define insert_large_chunk(M, X, S) {\3729tbinptr* H;\3730bindex_t I;\3731compute_tree_index(S, I);\3732H = treebin_at(M, I);\3733X->index = I;\3734X->child[0] = X->child[1] = 0;\3735if (!treemap_is_marked(M, I)) {\3736mark_treemap(M, I);\3737*H = X;\3738X->parent = (tchunkptr)H;\3739X->fd = X->bk = X;\3740}\3741else {\3742tchunkptr T = *H;\3743size_t K = S << leftshift_for_tree_index(I);\3744for (;;) {\3745if (chunksize(T) != S) {\3746tchunkptr* C = &(T->child[(K >> 
(SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\3747K <<= 1;\3748if (*C != 0)\3749T = *C;\3750else if (RTCHECK(ok_address(M, C))) {\3751*C = X;\3752X->parent = T;\3753X->fd = X->bk = X;\3754break;\3755}\3756else {\3757CORRUPTION_ERROR_ACTION(M);\3758break;\3759}\3760}\3761else {\3762tchunkptr F = T->fd;\3763if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\3764T->fd = F->bk = X;\3765X->fd = F;\3766X->bk = T;\3767X->parent = 0;\3768break;\3769}\3770else {\3771CORRUPTION_ERROR_ACTION(M);\3772break;\3773}\3774}\3775}\3776}\3777}37783779/*3780Unlink steps:378137821. If x is a chained node, unlink it from its same-sized fd/bk links3783and choose its bk node as its replacement.37842. If x was the last node of its size, but not a leaf node, it must3785be replaced with a leaf node (not merely one with an open left or3786right), to make sure that lefts and rights of descendents3787correspond properly to bit masks. We use the rightmost descendent3788of x. We could use any other leaf, but this is easy to locate and3789tends to counteract removal of leftmosts elsewhere, and so keeps3790paths shorter than minimally guaranteed. This doesn't loop much3791because on average a node in a tree is near the bottom.37923. If x is the base of a chain (i.e., has parent links) relink3793x's parent and children to x's replacement (or null if none).3794*/37953796#define unlink_large_chunk(M, X) { \3797tchunkptr XP = X->parent; \3798tchunkptr R; \3799if (X->bk != X) { \3800tchunkptr F = X->fd; \3801R = X->bk; \3802if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) { \3803F->bk = R; \3804R->fd = F; \3805} \3806else { \3807CORRUPTION_ERROR_ACTION(M); \3808} \3809} \3810else { \3811tchunkptr* RP; \3812if (((R = *(RP = &(X->child[1]))) != 0) || \3813((R = *(RP = &(X->child[0]))) != 0)) { \3814tchunkptr* CP; \3815while ((*(CP = &(R->child[1])) != 0) || \3816(*(CP = &(R->child[0])) != 0)) { \3817R = *(RP = CP); \3818} \3819if (RTCHECK(ok_address(M, RP))) \3820*RP = 0; \3821else { \3822CORRUPTION_ERROR_ACTION(M); \3823} \3824} \3825} \3826if (XP != 0) { \3827tbinptr* H = treebin_at(M, X->index); \3828if (X == *H) { \3829if ((*H = R) == 0) \3830clear_treemap(M, X->index); \3831} \3832else if (RTCHECK(ok_address(M, XP))) { \3833if (XP->child[0] == X) \3834XP->child[0] = R; \3835else \3836XP->child[1] = R; \3837} \3838else \3839CORRUPTION_ERROR_ACTION(M); \3840if (R != 0) { \3841if (RTCHECK(ok_address(M, R))) { \3842tchunkptr C0, C1; \3843R->parent = XP; \3844if ((C0 = X->child[0]) != 0) { \3845if (RTCHECK(ok_address(M, C0))) { \3846R->child[0] = C0; \3847C0->parent = R; \3848} \3849else \3850CORRUPTION_ERROR_ACTION(M); \3851} \3852if ((C1 = X->child[1]) != 0) { \3853if (RTCHECK(ok_address(M, C1))) { \3854R->child[1] = C1; \3855C1->parent = R; \3856} \3857else \3858CORRUPTION_ERROR_ACTION(M); \3859} \3860} \3861else \3862CORRUPTION_ERROR_ACTION(M); \3863} \3864} \3865}38663867/* Relays to large vs small bin operations */38683869#define insert_chunk(M, P, S) \3870if (is_small(S)) insert_small_chunk(M, P, S) \3871else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }38723873#define unlink_chunk(M, P, S) \3874if (is_small(S)) unlink_small_chunk(M, P, S) \3875else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }387638773878/* Relays to internal calls to malloc/free from realloc, memalign etc */38793880#if ONLY_MSPACES3881#define internal_malloc(m, b) mspace_malloc(m, b)3882#define internal_free(m, mem) mspace_free(m,mem);3883#else /* ONLY_MSPACES */3884#if MSPACES3885#define internal_malloc(m, b)\3886((m 
== gm)? dlmalloc(b) : mspace_malloc(m, b))3887#define internal_free(m, mem)\3888if (m == gm) dlfree(mem); else mspace_free(m,mem);3889#else /* MSPACES */3890#define internal_malloc(m, b) dlmalloc(b)3891#define internal_free(m, mem) dlfree(mem)3892#endif /* MSPACES */3893#endif /* ONLY_MSPACES */38943895/* ----------------------- Direct-mmapping chunks ----------------------- */38963897/*3898Directly mmapped chunks are set up with an offset to the start of3899the mmapped region stored in the prev_foot field of the chunk. This3900allows reconstruction of the required argument to MUNMAP when freed,3901and also allows adjustment of the returned chunk to meet alignment3902requirements (especially in memalign).3903*/39043905/* Malloc using mmap */3906static void* mmap_alloc(mstate m, size_t nb) {3907size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);3908if (m->footprint_limit != 0) {3909size_t fp = m->footprint + mmsize;3910if (fp <= m->footprint || fp > m->footprint_limit)3911return 0;3912}3913if (mmsize > nb) { /* Check for wrap around 0 */3914char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));3915if (mm != CMFAIL) {3916size_t offset = align_offset(chunk2mem(mm));3917size_t psize = mmsize - offset - MMAP_FOOT_PAD;3918mchunkptr p = (mchunkptr)(mm + offset);3919p->prev_foot = offset;3920p->head = psize;3921mark_inuse_foot(m, p, psize);3922chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;3923chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;39243925if (m->least_addr == 0 || mm < m->least_addr)3926m->least_addr = mm;3927if ((m->footprint += mmsize) > m->max_footprint)3928m->max_footprint = m->footprint;3929assert(is_aligned(chunk2mem(p)));3930check_mmapped_chunk(m, p);3931return chunk2mem(p);3932}3933}3934return 0;3935}39363937/* Realloc using mmap */3938static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {3939size_t oldsize = chunksize(oldp);3940(void)flags; /* placate people compiling -Wunused */3941if (is_small(nb)) /* Can't shrink mmap regions below small size */3942return 0;3943/* Keep old chunk if big enough but not too big */3944if (oldsize >= nb + SIZE_T_SIZE &&3945(oldsize - nb) <= (mparams.granularity << 1))3946return oldp;3947else {3948size_t offset = oldp->prev_foot;3949size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;3950size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);3951char* cp = (char*)CALL_MREMAP((char*)oldp - offset,3952oldmmsize, newmmsize, flags);3953if (cp != CMFAIL) {3954mchunkptr newp = (mchunkptr)(cp + offset);3955size_t psize = newmmsize - offset - MMAP_FOOT_PAD;3956newp->head = psize;3957mark_inuse_foot(m, newp, psize);3958chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;3959chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;39603961if (cp < m->least_addr)3962m->least_addr = cp;3963if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)3964m->max_footprint = m->footprint;3965check_mmapped_chunk(m, newp);3966return newp;3967}3968}3969return 0;3970}397139723973/* -------------------------- mspace management -------------------------- */39743975/* Initialize top chunk and its size */3976static void init_top(mstate m, mchunkptr p, size_t psize) {3977/* Ensure alignment */3978size_t offset = align_offset(chunk2mem(p));3979p = (mchunkptr)((char*)p + offset);3980psize -= offset;39813982m->top = p;3983m->topsize = psize;3984p->head = psize | PINUSE_BIT;3985/* set size of fake trailing chunk holding overhead space only once */3986chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;3987m->trim_check = 
mparams.trim_threshold; /* reset on each update */3988}39893990/* Initialize bins for a new mstate that is otherwise zeroed out */3991static void init_bins(mstate m) {3992/* Establish circular links for smallbins */3993bindex_t i;3994for (i = 0; i < NSMALLBINS; ++i) {3995sbinptr bin = smallbin_at(m,i);3996bin->fd = bin->bk = bin;3997}3998}39994000#if PROCEED_ON_ERROR40014002/* default corruption action */4003static void reset_on_error(mstate m) {4004int i;4005++malloc_corruption_error_count;4006/* Reinitialize fields to forget about all memory */4007m->smallmap = m->treemap = 0;4008m->dvsize = m->topsize = 0;4009m->seg.base = 0;4010m->seg.size = 0;4011m->seg.next = 0;4012m->top = m->dv = 0;4013for (i = 0; i < NTREEBINS; ++i)4014*treebin_at(m, i) = 0;4015init_bins(m);4016}4017#endif /* PROCEED_ON_ERROR */40184019/* Allocate chunk and prepend remainder with chunk in successor base. */4020static void* prepend_alloc(mstate m, char* newbase, char* oldbase,4021size_t nb) {4022mchunkptr p = align_as_chunk(newbase);4023mchunkptr oldfirst = align_as_chunk(oldbase);4024size_t psize = (char*)oldfirst - (char*)p;4025mchunkptr q = chunk_plus_offset(p, nb);4026size_t qsize = psize - nb;4027set_size_and_pinuse_of_inuse_chunk(m, p, nb);40284029assert((char*)oldfirst > (char*)q);4030assert(pinuse(oldfirst));4031assert(qsize >= MIN_CHUNK_SIZE);40324033/* consolidate remainder with first chunk of old base */4034if (oldfirst == m->top) {4035size_t tsize = m->topsize += qsize;4036m->top = q;4037q->head = tsize | PINUSE_BIT;4038check_top_chunk(m, q);4039}4040else if (oldfirst == m->dv) {4041size_t dsize = m->dvsize += qsize;4042m->dv = q;4043set_size_and_pinuse_of_free_chunk(q, dsize);4044}4045else {4046if (!is_inuse(oldfirst)) {4047size_t nsize = chunksize(oldfirst);4048unlink_chunk(m, oldfirst, nsize);4049oldfirst = chunk_plus_offset(oldfirst, nsize);4050qsize += nsize;4051}4052set_free_with_pinuse(q, qsize, oldfirst);4053insert_chunk(m, q, qsize);4054check_free_chunk(m, q);4055}40564057check_malloced_chunk(m, chunk2mem(p), nb);4058return chunk2mem(p);4059}40604061/* Add a segment to hold a new noncontiguous region */4062static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {4063/* Determine locations and sizes of segment, fenceposts, old top */4064char* old_top = (char*)m->top;4065msegmentptr oldsp = segment_holding(m, old_top);4066char* old_end = oldsp->base + oldsp->size;4067size_t ssize = pad_request(sizeof(struct malloc_segment));4068char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);4069size_t offset = align_offset(chunk2mem(rawsp));4070char* asp = rawsp + offset;4071char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? 
old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;
  size_t asize; /* allocation size */

  ensure_initialization();

  /* Directly map large chunks, but only if already initialized */
  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  asize = granularity_align(nb + SYS_ALLOC_PADDING);
  if (asize <= nb)
    return 0; /* wraparound */
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + asize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)

    In all cases, we need to request enough bytes from system to ensure
    we can malloc nb bytes upon success, so pad with enough space for
    top_foot, plus alignment-pad to make sure we don't lose bytes if
    not on boundary, and round this up to a granularity unit.
  */

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    size_t ssize = asize; /* sbrk call size */
    msegmentptr ss = (m->top == 0)?
0 : segment_holding(m, (char*)m->top);4167ACQUIRE_MALLOC_GLOBAL_LOCK();41684169if (ss == 0) { /* First time through or recovery */4170char* base = (char*)CALL_MORECORE(0);4171if (base != CMFAIL) {4172size_t fp;4173/* Adjust to end on a page boundary */4174if (!is_page_aligned(base))4175ssize += (page_align((size_t)base) - (size_t)base);4176fp = m->footprint + ssize; /* recheck limits */4177if (ssize > nb &&4178(UNSIGNED_MORECORE || ssize < HALF_MAX_SIZE_T) &&4179(m->footprint_limit == 0 ||4180(fp > m->footprint && fp <= m->footprint_limit)) &&4181(br = (char*)(CALL_MORECORE(ssize))) == base) {4182tbase = base;4183tsize = ssize;4184}4185}4186}4187else {4188/* Subtract out existing available top space from MORECORE request. */4189ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);4190/* Use mem here only if it did continuously extend old space */4191if ((UNSIGNED_MORECORE || ssize < HALF_MAX_SIZE_T) &&4192(br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {4193tbase = br;4194tsize = ssize;4195}4196}41974198if (tbase == CMFAIL) { /* Cope with partial failure */4199if (br != CMFAIL) { /* Try to use/extend the space we did get */4200if ((UNSIGNED_MORECORE || ssize < HALF_MAX_SIZE_T) &&4201ssize < nb + SYS_ALLOC_PADDING) {4202size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);4203if (UNSIGNED_MORECORE || esize < HALF_MAX_SIZE_T) {4204char* end = (char*)CALL_MORECORE(esize);4205if (end != CMFAIL)4206ssize += esize;4207else { /* Can't use; try to release */4208if (!UNSIGNED_MORECORE) {4209(void) CALL_MORECORE(-ssize);4210}4211br = CMFAIL;4212}4213}4214}4215}4216if (br != CMFAIL) { /* Use the space we did get */4217tbase = br;4218tsize = ssize;4219}4220else4221disable_contiguous(m); /* Don't try contiguous path in the future */4222}42234224RELEASE_MALLOC_GLOBAL_LOCK();4225}42264227if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */4228char* mp = (char*)(CALL_MMAP(asize));4229if (mp != CMFAIL) {4230tbase = mp;4231tsize = asize;4232mmap_flag = USE_MMAP_BIT;4233}4234}42354236if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */4237if (UNSIGNED_MORECORE || asize < HALF_MAX_SIZE_T) {4238char* br = CMFAIL;4239char* end = CMFAIL;4240ACQUIRE_MALLOC_GLOBAL_LOCK();4241br = (char*)(CALL_MORECORE(asize));4242end = (char*)(CALL_MORECORE(0));4243RELEASE_MALLOC_GLOBAL_LOCK();4244if (br != CMFAIL && end != CMFAIL && br < end) {4245size_t ssize = end - br;4246if (ssize > nb + TOP_FOOT_SIZE) {4247tbase = br;4248tsize = ssize;4249}4250}4251}4252}42534254if (tbase != CMFAIL) {42554256if ((m->footprint += tsize) > m->max_footprint)4257m->max_footprint = m->footprint;42584259if (!is_initialized(m)) { /* first-time initialization */4260if (m->least_addr == 0 || tbase < m->least_addr)4261m->least_addr = tbase;4262m->seg.base = tbase;4263m->seg.size = tsize;4264m->seg.sflags = mmap_flag;4265m->magic = mparams.magic;4266m->release_checks = MAX_RELEASE_CHECK_RATE;4267init_bins(m);4268#if !ONLY_MSPACES4269if (is_global(m))4270init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);4271else4272#endif4273{4274/* Offset top by embedded malloc_state */4275mchunkptr mn = next_chunk(mem2chunk(m));4276init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);4277}4278}42794280else {4281/* Try to merge with an existing segment */4282msegmentptr sp = &m->seg;4283/* Only consider most recent segment if traversal suppressed */4284while (sp != 0 && tbase != sp->base + sp->size)4285sp = (NO_SEGMENT_TRAVERSAL) ? 
0 : sp->next;4286if (sp != 0 &&4287!is_extern_segment(sp) &&4288(sp->sflags & USE_MMAP_BIT) == mmap_flag &&4289segment_holds(sp, m->top)) { /* append */4290sp->size += tsize;4291init_top(m, m->top, m->topsize + tsize);4292}4293else {4294if (tbase < m->least_addr)4295m->least_addr = tbase;4296sp = &m->seg;4297while (sp != 0 && sp->base != tbase + tsize)4298sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;4299if (sp != 0 &&4300!is_extern_segment(sp) &&4301(sp->sflags & USE_MMAP_BIT) == mmap_flag) {4302char* oldbase = sp->base;4303sp->base = tbase;4304sp->size += tsize;4305return prepend_alloc(m, tbase, oldbase, nb);4306}4307else4308add_segment(m, tbase, tsize, mmap_flag);4309}4310}43114312if (nb < m->topsize) { /* Allocate from new or extended top space */4313size_t rsize = m->topsize -= nb;4314mchunkptr p = m->top;4315mchunkptr r = m->top = chunk_plus_offset(p, nb);4316r->head = rsize | PINUSE_BIT;4317set_size_and_pinuse_of_inuse_chunk(m, p, nb);4318check_top_chunk(m, m->top);4319check_malloced_chunk(m, chunk2mem(p), nb);4320return chunk2mem(p);4321}4322}43234324MALLOC_FAILURE_ACTION;4325return 0;4326}43274328/* ----------------------- system deallocation -------------------------- */43294330/* Unmap and unlink any mmapped segments that don't contain used chunks */4331static size_t release_unused_segments(mstate m) {4332size_t released = 0;4333int nsegs = 0;4334msegmentptr pred = &m->seg;4335msegmentptr sp = pred->next;4336while (sp != 0) {4337char* base = sp->base;4338size_t size = sp->size;4339msegmentptr next = sp->next;4340++nsegs;4341if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {4342mchunkptr p = align_as_chunk(base);4343size_t psize = chunksize(p);4344/* Can unmap if first chunk holds entire segment and not pinned */4345if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {4346tchunkptr tp = (tchunkptr)p;4347assert(segment_holds(sp, (char*)sp));4348if (p == m->dv) {4349m->dv = 0;4350m->dvsize = 0;4351}4352else {4353unlink_large_chunk(m, tp);4354}4355if (CALL_MUNMAP(base, size) == 0) {4356released += size;4357m->footprint -= size;4358/* unlink obsoleted record */4359sp = pred;4360sp->next = next;4361}4362else { /* back out if cannot unmap */4363insert_large_chunk(m, tp, psize);4364}4365}4366}4367if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */4368break;4369pred = sp;4370sp = next;4371}4372/* Reset check counter */4373m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?4374(size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);4375return released;4376}43774378static int sys_trim(mstate m, size_t pad) {4379size_t released = 0;4380ensure_initialization();4381if (pad < MAX_REQUEST && is_initialized(m)) {4382pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */43834384if (m->topsize > pad) {4385/* Shrink top space in granularity-size units, keeping at least one */4386size_t unit = mparams.granularity;4387size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -4388SIZE_T_ONE) * unit;4389msegmentptr sp = segment_holding(m, (char*)m->top);43904391if (!is_extern_segment(sp)) {4392if (is_mmapped_segment(sp)) {4393if (HAVE_MMAP &&4394sp->size >= extra &&4395!has_segment_link(m, sp)) { /* can't shrink if pinned */4396size_t newsize = sp->size - extra;4397(void)newsize; /* placate people compiling -Wunused-variable */4398/* Prefer mremap, fall back to munmap */4399if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||4400(CALL_MUNMAP(sp->base + newsize, extra) == 0)) {4401released = extra;4402}4403}4404}4405else if 
(HAVE_MORECORE) {4406#ifndef MORECORE_CANNOT_TRIM4407if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */4408extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;4409ACQUIRE_MALLOC_GLOBAL_LOCK();4410{4411/* Make sure end of memory is where we last set it. */4412char* old_br = (char*)(CALL_MORECORE(0));4413if (old_br == sp->base + sp->size) {4414char* rel_br = (char*)(CALL_MORECORE(-extra));4415char* new_br = (char*)(CALL_MORECORE(0));4416if (rel_br != CMFAIL && new_br < old_br)4417released = old_br - new_br;4418}4419}4420RELEASE_MALLOC_GLOBAL_LOCK();4421#endif4422}4423}44244425if (released != 0) {4426sp->size -= released;4427m->footprint -= released;4428init_top(m, m->top, m->topsize - released);4429check_top_chunk(m, m->top);4430}4431}44324433/* Unmap any unused mmapped segments */4434if (HAVE_MMAP)4435released += release_unused_segments(m);44364437/* On failure, disable autotrim to avoid repeated failed future calls */4438if (released == 0 && m->topsize > m->trim_check)4439m->trim_check = MAX_SIZE_T;4440}44414442return (released != 0)? 1 : 0;4443}44444445/* Consolidate and bin a chunk. Differs from exported versions4446of free mainly in that the chunk need not be marked as inuse.4447*/4448static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {4449mchunkptr next = chunk_plus_offset(p, psize);4450if (!pinuse(p)) {4451mchunkptr prev;4452size_t prevsize = p->prev_foot;4453if (is_mmapped(p)) {4454psize += prevsize + MMAP_FOOT_PAD;4455if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)4456m->footprint -= psize;4457return;4458}4459prev = chunk_minus_offset(p, prevsize);4460psize += prevsize;4461p = prev;4462if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */4463if (p != m->dv) {4464unlink_chunk(m, p, prevsize);4465}4466else if ((next->head & INUSE_BITS) == INUSE_BITS) {4467m->dvsize = psize;4468set_free_with_pinuse(p, psize, next);4469return;4470}4471}4472else {4473CORRUPTION_ERROR_ACTION(m);4474return;4475}4476}4477if (RTCHECK(ok_address(m, next))) {4478if (!cinuse(next)) { /* consolidate forward */4479if (next == m->top) {4480size_t tsize = m->topsize += psize;4481m->top = p;4482p->head = tsize | PINUSE_BIT;4483if (p == m->dv) {4484m->dv = 0;4485m->dvsize = 0;4486}4487return;4488}4489else if (next == m->dv) {4490size_t dsize = m->dvsize += psize;4491m->dv = p;4492set_size_and_pinuse_of_free_chunk(p, dsize);4493return;4494}4495else {4496size_t nsize = chunksize(next);4497psize += nsize;4498unlink_chunk(m, next, nsize);4499set_size_and_pinuse_of_free_chunk(p, psize);4500if (p == m->dv) {4501m->dvsize = psize;4502return;4503}4504}4505}4506else {4507set_free_with_pinuse(p, psize, next);4508}4509insert_chunk(m, p, psize);4510}4511else {4512CORRUPTION_ERROR_ACTION(m);4513}4514}45154516/* ---------------------------- malloc --------------------------- */45174518/* allocate a large request from the best fitting chunk in a treebin */4519static void* tmalloc_large(mstate m, size_t nb) {4520tchunkptr v = 0;4521size_t rsize = -nb; /* Unsigned negation */4522tchunkptr t;4523bindex_t idx;4524compute_tree_index(nb, idx);4525if ((t = *treebin_at(m, idx)) != 0) {4526/* Traverse tree for this bin looking for node with size == nb */4527size_t sizebits = nb << leftshift_for_tree_index(idx);4528tchunkptr rst = 0; /* The deepest untaken right subtree */4529for (;;) {4530tchunkptr rt;4531size_t trem = chunksize(t) - nb;4532if (trem < rsize) {4533v = t;4534if ((rsize = trem) == 0)4535break;4536}4537rt = t->child[1];4538t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];4539if (rt != 0 && 
rt != t)4540rst = rt;4541if (t == 0) {4542t = rst; /* set t to least subtree holding sizes > nb */4543break;4544}4545sizebits <<= 1;4546}4547}4548if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */4549binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;4550if (leftbits != 0) {4551bindex_t i;4552binmap_t leastbit = least_bit(leftbits);4553compute_bit2idx(leastbit, i);4554t = *treebin_at(m, i);4555}4556}45574558while (t != 0) { /* find smallest of tree or subtree */4559size_t trem = chunksize(t) - nb;4560if (trem < rsize) {4561rsize = trem;4562v = t;4563}4564t = leftmost_child(t);4565}45664567/* If dv is a better fit, return 0 so malloc will use it */4568if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {4569if (RTCHECK(ok_address(m, v))) { /* split */4570mchunkptr r = chunk_plus_offset(v, nb);4571assert(chunksize(v) == rsize + nb);4572if (RTCHECK(ok_next(v, r))) {4573unlink_large_chunk(m, v);4574if (rsize < MIN_CHUNK_SIZE)4575set_inuse_and_pinuse(m, v, (rsize + nb));4576else {4577set_size_and_pinuse_of_inuse_chunk(m, v, nb);4578set_size_and_pinuse_of_free_chunk(r, rsize);4579insert_chunk(m, r, rsize);4580}4581return chunk2mem(v);4582}4583}4584CORRUPTION_ERROR_ACTION(m);4585}4586return 0;4587}45884589/* allocate a small request from the best fitting chunk in a treebin */4590static void* tmalloc_small(mstate m, size_t nb) {4591tchunkptr t, v;4592size_t rsize;4593bindex_t i;4594binmap_t leastbit = least_bit(m->treemap);4595compute_bit2idx(leastbit, i);4596v = t = *treebin_at(m, i);4597rsize = chunksize(t) - nb;45984599while ((t = leftmost_child(t)) != 0) {4600size_t trem = chunksize(t) - nb;4601if (trem < rsize) {4602rsize = trem;4603v = t;4604}4605}46064607if (RTCHECK(ok_address(m, v))) {4608mchunkptr r = chunk_plus_offset(v, nb);4609assert(chunksize(v) == rsize + nb);4610if (RTCHECK(ok_next(v, r))) {4611unlink_large_chunk(m, v);4612if (rsize < MIN_CHUNK_SIZE)4613set_inuse_and_pinuse(m, v, (rsize + nb));4614else {4615set_size_and_pinuse_of_inuse_chunk(m, v, nb);4616set_size_and_pinuse_of_free_chunk(r, rsize);4617replace_dv(m, r, rsize);4618}4619return chunk2mem(v);4620}4621}46224623CORRUPTION_ERROR_ACTION(m);4624return 0;4625}46264627#if !ONLY_MSPACES46284629void* dlmalloc(size_t bytes) {4630/*4631Basic algorithm:4632If a small request (< 256 bytes minus per-chunk overhead):46331. If one exists, use a remainderless chunk in associated smallbin.4634(Remainderless means that there are too few excess bytes to4635represent as a chunk.)46362. If it is big enough, use the dv chunk, which is normally the4637chunk adjacent to the one used for the most recent small request.46383. If one exists, split the smallest available chunk in a bin,4639saving remainder in dv.46404. If it is big enough, use the top chunk.46415. If available, get memory from system and use it4642Otherwise, for a large request:46431. Find the smallest available binned chunk that fits, and use it4644if it is better fitting than dv chunk, splitting if necessary.46452. If better fitting than any binned chunk, use the dv chunk.46463. If it is big enough, use the top chunk.46474. If request size >= mmap threshold, try to directly mmap this chunk.46485. 
If available, get memory from system and use it46494650The ugly goto's here ensure that postaction occurs along all paths.4651*/46524653#if USE_LOCKS4654ensure_initialization(); /* initialize in sys_alloc if not using locks */4655#endif46564657if (!PREACTION(gm)) {4658void* mem;4659size_t nb;4660if (bytes <= MAX_SMALL_REQUEST) {4661bindex_t idx;4662binmap_t smallbits;4663nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);4664idx = small_index(nb);4665smallbits = gm->smallmap >> idx;46664667if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */4668mchunkptr b, p;4669idx += ~smallbits & 1; /* Uses next bin if idx empty */4670b = smallbin_at(gm, idx);4671p = b->fd;4672assert(chunksize(p) == small_index2size(idx));4673unlink_first_small_chunk(gm, b, p, idx);4674set_inuse_and_pinuse(gm, p, small_index2size(idx));4675mem = chunk2mem(p);4676check_malloced_chunk(gm, mem, nb);4677goto postaction;4678}46794680else if (nb > gm->dvsize) {4681if (smallbits != 0) { /* Use chunk in next nonempty smallbin */4682mchunkptr b, p, r;4683size_t rsize;4684bindex_t i;4685binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));4686binmap_t leastbit = least_bit(leftbits);4687compute_bit2idx(leastbit, i);4688b = smallbin_at(gm, i);4689p = b->fd;4690assert(chunksize(p) == small_index2size(i));4691unlink_first_small_chunk(gm, b, p, i);4692rsize = small_index2size(i) - nb;4693/* Fit here cannot be remainderless if 4byte sizes */4694if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)4695set_inuse_and_pinuse(gm, p, small_index2size(i));4696else {4697set_size_and_pinuse_of_inuse_chunk(gm, p, nb);4698r = chunk_plus_offset(p, nb);4699set_size_and_pinuse_of_free_chunk(r, rsize);4700replace_dv(gm, r, rsize);4701}4702mem = chunk2mem(p);4703check_malloced_chunk(gm, mem, nb);4704goto postaction;4705}47064707else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {4708check_malloced_chunk(gm, mem, nb);4709goto postaction;4710}4711}4712}4713else if (bytes >= MAX_REQUEST)4714nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */4715else {4716nb = pad_request(bytes);4717if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {4718check_malloced_chunk(gm, mem, nb);4719goto postaction;4720}4721}47224723if (nb <= gm->dvsize) {4724size_t rsize = gm->dvsize - nb;4725mchunkptr p = gm->dv;4726if (rsize >= MIN_CHUNK_SIZE) { /* split dv */4727mchunkptr r = gm->dv = chunk_plus_offset(p, nb);4728gm->dvsize = rsize;4729set_size_and_pinuse_of_free_chunk(r, rsize);4730set_size_and_pinuse_of_inuse_chunk(gm, p, nb);4731}4732else { /* exhaust dv */4733size_t dvs = gm->dvsize;4734gm->dvsize = 0;4735gm->dv = 0;4736set_inuse_and_pinuse(gm, p, dvs);4737}4738mem = chunk2mem(p);4739check_malloced_chunk(gm, mem, nb);4740goto postaction;4741}47424743else if (nb < gm->topsize) { /* Split top */4744size_t rsize = gm->topsize -= nb;4745mchunkptr p = gm->top;4746mchunkptr r = gm->top = chunk_plus_offset(p, nb);4747r->head = rsize | PINUSE_BIT;4748set_size_and_pinuse_of_inuse_chunk(gm, p, nb);4749mem = chunk2mem(p);4750check_top_chunk(gm, gm->top);4751check_malloced_chunk(gm, mem, nb);4752goto postaction;4753}47544755mem = sys_alloc(gm, nb);47564757postaction:4758POSTACTION(gm);4759#if __EMSCRIPTEN__4760/* XXX Emscripten Tracing API. 
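   Record this block together with the originally requested size so the
   tracing tools can pair it with the matching emscripten_trace_record_free()
   call made in dlfree() below.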
*/4761emscripten_trace_record_allocation(mem, bytes);4762#endif4763return mem;4764}47654766return 0;4767}47684769/* ---------------------------- free --------------------------- */47704771void dlfree(void* mem) {4772/*4773Consolidate freed chunks with preceeding or succeeding bordering4774free chunks, if they exist, and then place in a bin. Intermixed4775with special cases for top, dv, mmapped chunks, and usage errors.4776*/47774778if (mem != 0) {4779#if __EMSCRIPTEN__4780/* XXX Emscripten Tracing API. */4781emscripten_trace_record_free(mem);4782#endif4783mchunkptr p = mem2chunk(mem);4784#if FOOTERS4785mstate fm = get_mstate_for(p);4786if (!ok_magic(fm)) {4787USAGE_ERROR_ACTION(fm, p);4788return;4789}4790#else /* FOOTERS */4791#define fm gm4792#endif /* FOOTERS */4793if (!PREACTION(fm)) {4794check_inuse_chunk(fm, p);4795if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {4796size_t psize = chunksize(p);4797mchunkptr next = chunk_plus_offset(p, psize);4798if (!pinuse(p)) {4799size_t prevsize = p->prev_foot;4800if (is_mmapped(p)) {4801psize += prevsize + MMAP_FOOT_PAD;4802if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)4803fm->footprint -= psize;4804goto postaction;4805}4806else {4807mchunkptr prev = chunk_minus_offset(p, prevsize);4808psize += prevsize;4809p = prev;4810if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */4811if (p != fm->dv) {4812unlink_chunk(fm, p, prevsize);4813}4814else if ((next->head & INUSE_BITS) == INUSE_BITS) {4815fm->dvsize = psize;4816set_free_with_pinuse(p, psize, next);4817goto postaction;4818}4819}4820else4821goto erroraction;4822}4823}48244825if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {4826if (!cinuse(next)) { /* consolidate forward */4827if (next == fm->top) {4828size_t tsize = fm->topsize += psize;4829fm->top = p;4830p->head = tsize | PINUSE_BIT;4831if (p == fm->dv) {4832fm->dv = 0;4833fm->dvsize = 0;4834}4835if (should_trim(fm, tsize))4836sys_trim(fm, 0);4837goto postaction;4838}4839else if (next == fm->dv) {4840size_t dsize = fm->dvsize += psize;4841fm->dv = p;4842set_size_and_pinuse_of_free_chunk(p, dsize);4843goto postaction;4844}4845else {4846size_t nsize = chunksize(next);4847psize += nsize;4848unlink_chunk(fm, next, nsize);4849set_size_and_pinuse_of_free_chunk(p, psize);4850if (p == fm->dv) {4851fm->dvsize = psize;4852goto postaction;4853}4854}4855}4856else4857set_free_with_pinuse(p, psize, next);48584859if (is_small(psize)) {4860insert_small_chunk(fm, p, psize);4861check_free_chunk(fm, p);4862}4863else {4864tchunkptr tp = (tchunkptr)p;4865insert_large_chunk(fm, tp, psize);4866check_free_chunk(fm, p);4867if (--fm->release_checks == 0)4868release_unused_segments(fm);4869}4870goto postaction;4871}4872}4873erroraction:4874USAGE_ERROR_ACTION(fm, p);4875postaction:4876POSTACTION(fm);4877}4878}4879#if !FOOTERS4880#undef fm4881#endif /* FOOTERS */4882}48834884void* dlcalloc(size_t n_elements, size_t elem_size) {4885void* mem;4886size_t req = 0;4887if (n_elements != 0) {4888req = n_elements * elem_size;4889if (((n_elements | elem_size) & ~(size_t)0xffff) &&4890(req / n_elements != elem_size))4891req = MAX_SIZE_T; /* force downstream failure on overflow */4892}4893mem = dlmalloc(req);4894if (mem != 0 && calloc_must_clear(mem2chunk(mem)))4895memset(mem, 0, req);4896return mem;4897}48984899#endif /* !ONLY_MSPACES */49004901/* ------------ Internal support for realloc, memalign, etc -------------- */49024903/* Try to realloc; only in-place unless can_move true */4904static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,4905int 
can_move) {4906mchunkptr newp = 0;4907size_t oldsize = chunksize(p);4908mchunkptr next = chunk_plus_offset(p, oldsize);4909if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&4910ok_next(p, next) && ok_pinuse(next))) {4911if (is_mmapped(p)) {4912newp = mmap_resize(m, p, nb, can_move);4913}4914else if (oldsize >= nb) { /* already big enough */4915size_t rsize = oldsize - nb;4916if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */4917mchunkptr r = chunk_plus_offset(p, nb);4918set_inuse(m, p, nb);4919set_inuse(m, r, rsize);4920dispose_chunk(m, r, rsize);4921}4922newp = p;4923}4924else if (next == m->top) { /* extend into top */4925if (oldsize + m->topsize > nb) {4926size_t newsize = oldsize + m->topsize;4927size_t newtopsize = newsize - nb;4928mchunkptr newtop = chunk_plus_offset(p, nb);4929set_inuse(m, p, nb);4930newtop->head = newtopsize |PINUSE_BIT;4931m->top = newtop;4932m->topsize = newtopsize;4933newp = p;4934}4935}4936else if (next == m->dv) { /* extend into dv */4937size_t dvs = m->dvsize;4938if (oldsize + dvs >= nb) {4939size_t dsize = oldsize + dvs - nb;4940if (dsize >= MIN_CHUNK_SIZE) {4941mchunkptr r = chunk_plus_offset(p, nb);4942mchunkptr n = chunk_plus_offset(r, dsize);4943set_inuse(m, p, nb);4944set_size_and_pinuse_of_free_chunk(r, dsize);4945clear_pinuse(n);4946m->dvsize = dsize;4947m->dv = r;4948}4949else { /* exhaust dv */4950size_t newsize = oldsize + dvs;4951set_inuse(m, p, newsize);4952m->dvsize = 0;4953m->dv = 0;4954}4955newp = p;4956}4957}4958else if (!cinuse(next)) { /* extend into next free chunk */4959size_t nextsize = chunksize(next);4960if (oldsize + nextsize >= nb) {4961size_t rsize = oldsize + nextsize - nb;4962unlink_chunk(m, next, nextsize);4963if (rsize < MIN_CHUNK_SIZE) {4964size_t newsize = oldsize + nextsize;4965set_inuse(m, p, newsize);4966}4967else {4968mchunkptr r = chunk_plus_offset(p, nb);4969set_inuse(m, p, nb);4970set_inuse(m, r, rsize);4971dispose_chunk(m, r, rsize);4972}4973newp = p;4974}4975}4976}4977else {4978USAGE_ERROR_ACTION(m, chunk2mem(p));4979}4980return newp;4981}49824983static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {4984void* mem = 0;4985if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */4986alignment = MIN_CHUNK_SIZE;4987if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */4988size_t a = MALLOC_ALIGNMENT << 1;4989while (a < alignment) a <<= 1;4990alignment = a;4991}4992if (bytes >= MAX_REQUEST - alignment) {4993if (m != 0) { /* Test isn't needed but avoids compiler warning */4994MALLOC_FAILURE_ACTION;4995}4996}4997else {4998size_t nb = request2size(bytes);4999size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;5000mem = internal_malloc(m, req);5001if (mem != 0) {5002mchunkptr p = mem2chunk(mem);5003if (PREACTION(m))5004return 0;5005if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */5006/*5007Find an aligned spot inside chunk. 
Since we need to give5008back leading space in a chunk of at least MIN_CHUNK_SIZE, if5009the first calculation places us at a spot with less than5010MIN_CHUNK_SIZE leader, we can move to the next aligned spot.5011We've allocated enough total room so that this is always5012possible.5013*/5014char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -5015SIZE_T_ONE)) &5016-alignment));5017char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?5018br : br+alignment;5019mchunkptr newp = (mchunkptr)pos;5020size_t leadsize = pos - (char*)(p);5021size_t newsize = chunksize(p) - leadsize;50225023if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */5024newp->prev_foot = p->prev_foot + leadsize;5025newp->head = newsize;5026}5027else { /* Otherwise, give back leader, use the rest */5028set_inuse(m, newp, newsize);5029set_inuse(m, p, leadsize);5030dispose_chunk(m, p, leadsize);5031}5032p = newp;5033}50345035/* Give back spare room at the end */5036if (!is_mmapped(p)) {5037size_t size = chunksize(p);5038if (size > nb + MIN_CHUNK_SIZE) {5039size_t remainder_size = size - nb;5040mchunkptr remainder = chunk_plus_offset(p, nb);5041set_inuse(m, p, nb);5042set_inuse(m, remainder, remainder_size);5043dispose_chunk(m, remainder, remainder_size);5044}5045}50465047mem = chunk2mem(p);5048assert (chunksize(p) >= nb);5049assert(((size_t)mem & (alignment - 1)) == 0);5050check_inuse_chunk(m, p);5051POSTACTION(m);5052}5053}5054return mem;5055}50565057/*5058Common support for independent_X routines, handling5059all of the combinations that can result.5060The opts arg has:5061bit 0 set if all elements are same size (using sizes[0])5062bit 1 set if elements should be zeroed5063*/5064static void** ialloc(mstate m,5065size_t n_elements,5066size_t* sizes,5067int opts,5068void* chunks[]) {50695070size_t element_size; /* chunksize of each element, if all same */5071size_t contents_size; /* total size of elements */5072size_t array_size; /* request size of pointer array */5073void* mem; /* malloced aggregate space */5074mchunkptr p; /* corresponding chunk */5075size_t remainder_size; /* remaining bytes while splitting */5076void** marray; /* either "chunks" or malloced ptr array */5077mchunkptr array_chunk; /* chunk for malloced ptr array */5078flag_t was_enabled; /* to disable mmap */5079size_t size;5080size_t i;50815082ensure_initialization();5083/* compute array length, if needed */5084if (chunks != 0) {5085if (n_elements == 0)5086return chunks; /* nothing to do */5087marray = chunks;5088array_size = 0;5089}5090else {5091/* if empty req, must still return chunk representing empty array */5092if (n_elements == 0)5093return (void**)internal_malloc(m, 0);5094marray = 0;5095array_size = request2size(n_elements * (sizeof(void*)));5096}50975098/* compute total element size */5099if (opts & 0x1) { /* all-same-size */5100element_size = request2size(*sizes);5101contents_size = n_elements * element_size;5102}5103else { /* add up all the sizes */5104element_size = 0;5105contents_size = 0;5106for (i = 0; i != n_elements; ++i)5107contents_size += request2size(sizes[i]);5108}51095110size = contents_size + array_size;51115112/*5113Allocate the aggregate chunk. 
First disable direct-mmapping so5114malloc won't use it, since we would not be able to later5115free/realloc space internal to a segregated mmap region.5116*/5117was_enabled = use_mmap(m);5118disable_mmap(m);5119mem = internal_malloc(m, size - CHUNK_OVERHEAD);5120if (was_enabled)5121enable_mmap(m);5122if (mem == 0)5123return 0;51245125if (PREACTION(m)) return 0;5126p = mem2chunk(mem);5127remainder_size = chunksize(p);51285129assert(!is_mmapped(p));51305131if (opts & 0x2) { /* optionally clear the elements */5132memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);5133}51345135/* If not provided, allocate the pointer array as final part of chunk */5136if (marray == 0) {5137size_t array_chunk_size;5138array_chunk = chunk_plus_offset(p, contents_size);5139array_chunk_size = remainder_size - contents_size;5140marray = (void**) (chunk2mem(array_chunk));5141set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);5142remainder_size = contents_size;5143}51445145/* split out elements */5146for (i = 0; ; ++i) {5147marray[i] = chunk2mem(p);5148if (i != n_elements-1) {5149if (element_size != 0)5150size = element_size;5151else5152size = request2size(sizes[i]);5153remainder_size -= size;5154set_size_and_pinuse_of_inuse_chunk(m, p, size);5155p = chunk_plus_offset(p, size);5156}5157else { /* the final element absorbs any overallocation slop */5158set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);5159break;5160}5161}51625163#if DEBUG5164if (marray != chunks) {5165/* final element must have exactly exhausted chunk */5166if (element_size != 0) {5167assert(remainder_size == element_size);5168}5169else {5170assert(remainder_size == request2size(sizes[i]));5171}5172check_inuse_chunk(m, mem2chunk(marray));5173}5174for (i = 0; i != n_elements; ++i)5175check_inuse_chunk(m, mem2chunk(marray[i]));51765177#endif /* DEBUG */51785179POSTACTION(m);5180return marray;5181}51825183/* Try to free all pointers in the given array.5184Note: this could be made faster, by delaying consolidation,5185at the price of disabling some user integrity checks, We5186still optimize some consolidations by combining adjacent5187chunks before freeing, which will occur often if allocated5188with ialloc or the array is sorted.5189*/5190static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {5191size_t unfreed = 0;5192if (!PREACTION(m)) {5193void** a;5194void** fence = &(array[nelem]);5195for (a = array; a != fence; ++a) {5196void* mem = *a;5197if (mem != 0) {5198mchunkptr p = mem2chunk(mem);5199size_t psize = chunksize(p);5200#if FOOTERS5201if (get_mstate_for(p) != m) {5202++unfreed;5203continue;5204}5205#endif5206check_inuse_chunk(m, p);5207*a = 0;5208if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {5209void ** b = a + 1; /* try to merge with next chunk */5210mchunkptr next = next_chunk(p);5211if (b != fence && *b == chunk2mem(next)) {5212size_t newsize = chunksize(next) + psize;5213set_inuse(m, p, newsize);5214*b = chunk2mem(p);5215}5216else5217dispose_chunk(m, p, psize);5218}5219else {5220CORRUPTION_ERROR_ACTION(m);5221break;5222}5223}5224}5225if (should_trim(m, m->topsize))5226sys_trim(m, 0);5227POSTACTION(m);5228}5229return unfreed;5230}52315232/* Traversal */5233#if MALLOC_INSPECT_ALL5234static void internal_inspect_all(mstate m,5235void(*handler)(void *start,5236void *end,5237size_t used_bytes,5238void* callback_arg),5239void* arg) {5240if (is_initialized(m)) {5241mchunkptr top = m->top;5242msegmentptr s;5243for (s = &m->seg; s != 0; s = s->next) {5244mchunkptr q = 
align_as_chunk(s->base);5245while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {5246mchunkptr next = next_chunk(q);5247size_t sz = chunksize(q);5248size_t used;5249void* start;5250if (is_inuse(q)) {5251used = sz - CHUNK_OVERHEAD; /* must not be mmapped */5252start = chunk2mem(q);5253}5254else {5255used = 0;5256if (is_small(sz)) { /* offset by possible bookkeeping */5257start = (void*)((char*)q + sizeof(struct malloc_chunk));5258}5259else {5260start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));5261}5262}5263if (start < (void*)next) /* skip if all space is bookkeeping */5264handler(start, next, used, arg);5265if (q == top)5266break;5267q = next;5268}5269}5270}5271}5272#endif /* MALLOC_INSPECT_ALL */52735274/* ------------------ Exported realloc, memalign, etc -------------------- */52755276#if !ONLY_MSPACES52775278void* dlrealloc(void* oldmem, size_t bytes) {5279void* mem = 0;5280if (oldmem == 0) {5281mem = dlmalloc(bytes);5282}5283else if (bytes >= MAX_REQUEST) {5284MALLOC_FAILURE_ACTION;5285}5286#ifdef REALLOC_ZERO_BYTES_FREES5287else if (bytes == 0) {5288dlfree(oldmem);5289}5290#endif /* REALLOC_ZERO_BYTES_FREES */5291else {5292size_t nb = request2size(bytes);5293mchunkptr oldp = mem2chunk(oldmem);5294#if ! FOOTERS5295mstate m = gm;5296#else /* FOOTERS */5297mstate m = get_mstate_for(oldp);5298if (!ok_magic(m)) {5299USAGE_ERROR_ACTION(m, oldmem);5300return 0;5301}5302#endif /* FOOTERS */5303if (!PREACTION(m)) {5304mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);5305POSTACTION(m);5306if (newp != 0) {5307check_inuse_chunk(m, newp);5308mem = chunk2mem(newp);5309#if __EMSCRIPTEN__5310/* XXX Emscripten Tracing API. */5311emscripten_trace_record_reallocation(oldmem, mem, bytes);5312#endif5313}5314else {5315mem = internal_malloc(m, bytes);5316if (mem != 0) {5317size_t oc = chunksize(oldp) - overhead_for(oldp);5318memcpy(mem, oldmem, (oc < bytes)? oc : bytes);5319internal_free(m, oldmem);5320}5321}5322}5323}5324return mem;5325}53265327void* dlrealloc_in_place(void* oldmem, size_t bytes) {5328void* mem = 0;5329if (oldmem != 0) {5330if (bytes >= MAX_REQUEST) {5331MALLOC_FAILURE_ACTION;5332}5333else {5334size_t nb = request2size(bytes);5335mchunkptr oldp = mem2chunk(oldmem);5336#if ! FOOTERS5337mstate m = gm;5338#else /* FOOTERS */5339mstate m = get_mstate_for(oldp);5340if (!ok_magic(m)) {5341USAGE_ERROR_ACTION(m, oldmem);5342return 0;5343}5344#endif /* FOOTERS */5345if (!PREACTION(m)) {5346mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);5347POSTACTION(m);5348if (newp == oldp) {5349check_inuse_chunk(m, newp);5350mem = oldmem;5351}5352}5353}5354}5355#if __EMSCRIPTEN__5356/* XXX Emscripten Tracing API. 
*/5357emscripten_trace_record_reallocation(oldmem, mem, bytes);5358#endif5359return mem;5360}53615362void* dlmemalign(size_t alignment, size_t bytes) {5363if (alignment <= MALLOC_ALIGNMENT) {5364return dlmalloc(bytes);5365}5366return internal_memalign(gm, alignment, bytes);5367}53685369int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {5370void* mem = 0;5371if (alignment == MALLOC_ALIGNMENT)5372mem = dlmalloc(bytes);5373else {5374size_t d = alignment / sizeof(void*);5375size_t r = alignment % sizeof(void*);5376if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)5377return EINVAL;5378else if (bytes <= MAX_REQUEST - alignment) {5379if (alignment < MIN_CHUNK_SIZE)5380alignment = MIN_CHUNK_SIZE;5381mem = internal_memalign(gm, alignment, bytes);5382}5383}5384if (mem == 0)5385return ENOMEM;5386else {5387*pp = mem;5388return 0;5389}5390}53915392void* dlvalloc(size_t bytes) {5393size_t pagesz;5394ensure_initialization();5395pagesz = mparams.page_size;5396return dlmemalign(pagesz, bytes);5397}53985399void* dlpvalloc(size_t bytes) {5400size_t pagesz;5401ensure_initialization();5402pagesz = mparams.page_size;5403return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));5404}54055406void** dlindependent_calloc(size_t n_elements, size_t elem_size,5407void* chunks[]) {5408size_t sz = elem_size; /* serves as 1-element array */5409return ialloc(gm, n_elements, &sz, 3, chunks);5410}54115412void** dlindependent_comalloc(size_t n_elements, size_t sizes[],5413void* chunks[]) {5414return ialloc(gm, n_elements, sizes, 0, chunks);5415}54165417size_t dlbulk_free(void* array[], size_t nelem) {5418return internal_bulk_free(gm, array, nelem);5419}54205421#if MALLOC_INSPECT_ALL5422void dlmalloc_inspect_all(void(*handler)(void *start,5423void *end,5424size_t used_bytes,5425void* callback_arg),5426void* arg) {5427ensure_initialization();5428if (!PREACTION(gm)) {5429internal_inspect_all(gm, handler, arg);5430POSTACTION(gm);5431}5432}5433#endif /* MALLOC_INSPECT_ALL */54345435int dlmalloc_trim(size_t pad) {5436int result = 0;5437ensure_initialization();5438if (!PREACTION(gm)) {5439result = sys_trim(gm, pad);5440POSTACTION(gm);5441}5442return result;5443}54445445size_t dlmalloc_footprint(void) {5446return gm->footprint;5447}54485449size_t dlmalloc_max_footprint(void) {5450return gm->max_footprint;5451}54525453size_t dlmalloc_footprint_limit(void) {5454size_t maf = gm->footprint_limit;5455return maf == 0 ? 
MAX_SIZE_T : maf;5456}54575458size_t dlmalloc_set_footprint_limit(size_t bytes) {5459size_t result; /* invert sense of 0 */5460if (bytes == 0)5461result = granularity_align(1); /* Use minimal size */5462if (bytes == MAX_SIZE_T)5463result = 0; /* disable */5464else5465result = granularity_align(bytes);5466return gm->footprint_limit = result;5467}54685469#if !NO_MALLINFO5470struct mallinfo dlmallinfo(void) {5471return internal_mallinfo(gm);5472}5473#endif /* NO_MALLINFO */54745475#if !NO_MALLOC_STATS5476void dlmalloc_stats() {5477internal_malloc_stats(gm);5478}5479#endif /* NO_MALLOC_STATS */54805481int dlmallopt(int param_number, int value) {5482return change_mparam(param_number, value);5483}54845485size_t dlmalloc_usable_size(void* mem) {5486if (mem != 0) {5487mchunkptr p = mem2chunk(mem);5488if (is_inuse(p))5489return chunksize(p) - overhead_for(p);5490}5491return 0;5492}54935494#endif /* !ONLY_MSPACES */54955496/* ----------------------------- user mspaces ---------------------------- */54975498#if MSPACES54995500static mstate init_user_mstate(char* tbase, size_t tsize) {5501size_t msize = pad_request(sizeof(struct malloc_state));5502mchunkptr mn;5503mchunkptr msp = align_as_chunk(tbase);5504mstate m = (mstate)(chunk2mem(msp));5505memset(m, 0, msize);5506(void)INITIAL_LOCK(&m->mutex);5507msp->head = (msize|INUSE_BITS);5508m->seg.base = m->least_addr = tbase;5509m->seg.size = m->footprint = m->max_footprint = tsize;5510m->magic = mparams.magic;5511m->release_checks = MAX_RELEASE_CHECK_RATE;5512m->mflags = mparams.default_mflags;5513m->extp = 0;5514m->exts = 0;5515disable_contiguous(m);5516init_bins(m);5517mn = next_chunk(mem2chunk(m));5518init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);5519check_top_chunk(m, m->top);5520return m;5521}55225523mspace create_mspace(size_t capacity, int locked) {5524mstate m = 0;5525size_t msize;5526ensure_initialization();5527msize = pad_request(sizeof(struct malloc_state));5528if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {5529size_t rs = ((capacity == 0)? 
mparams.granularity :5530(capacity + TOP_FOOT_SIZE + msize));5531size_t tsize = granularity_align(rs);5532char* tbase = (char*)(CALL_MMAP(tsize));5533if (tbase != CMFAIL) {5534m = init_user_mstate(tbase, tsize);5535m->seg.sflags = USE_MMAP_BIT;5536set_lock(m, locked);5537}5538}5539return (mspace)m;5540}55415542mspace create_mspace_with_base(void* base, size_t capacity, int locked) {5543mstate m = 0;5544size_t msize;5545ensure_initialization();5546msize = pad_request(sizeof(struct malloc_state));5547if (capacity > msize + TOP_FOOT_SIZE &&5548capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {5549m = init_user_mstate((char*)base, capacity);5550m->seg.sflags = EXTERN_BIT;5551set_lock(m, locked);5552}5553return (mspace)m;5554}55555556int mspace_track_large_chunks(mspace msp, int enable) {5557int ret = 0;5558mstate ms = (mstate)msp;5559if (!PREACTION(ms)) {5560if (!use_mmap(ms)) {5561ret = 1;5562}5563if (!enable) {5564enable_mmap(ms);5565} else {5566disable_mmap(ms);5567}5568POSTACTION(ms);5569}5570return ret;5571}55725573size_t destroy_mspace(mspace msp) {5574size_t freed = 0;5575mstate ms = (mstate)msp;5576if (ok_magic(ms)) {5577msegmentptr sp = &ms->seg;5578(void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */5579while (sp != 0) {5580char* base = sp->base;5581size_t size = sp->size;5582flag_t flag = sp->sflags;5583(void)base; /* placate people compiling -Wunused-variable */5584sp = sp->next;5585if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&5586CALL_MUNMAP(base, size) == 0)5587freed += size;5588}5589}5590else {5591USAGE_ERROR_ACTION(ms,ms);5592}5593return freed;5594}55955596/*5597mspace versions of routines are near-clones of the global5598versions. This is not so nice but better than the alternatives.5599*/56005601void* mspace_malloc(mspace msp, size_t bytes) {5602mstate ms = (mstate)msp;5603if (!ok_magic(ms)) {5604USAGE_ERROR_ACTION(ms,ms);5605return 0;5606}5607if (!PREACTION(ms)) {5608void* mem;5609size_t nb;5610if (bytes <= MAX_SMALL_REQUEST) {5611bindex_t idx;5612binmap_t smallbits;5613nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);5614idx = small_index(nb);5615smallbits = ms->smallmap >> idx;56165617if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. 
*/5618mchunkptr b, p;5619idx += ~smallbits & 1; /* Uses next bin if idx empty */5620b = smallbin_at(ms, idx);5621p = b->fd;5622assert(chunksize(p) == small_index2size(idx));5623unlink_first_small_chunk(ms, b, p, idx);5624set_inuse_and_pinuse(ms, p, small_index2size(idx));5625mem = chunk2mem(p);5626check_malloced_chunk(ms, mem, nb);5627goto postaction;5628}56295630else if (nb > ms->dvsize) {5631if (smallbits != 0) { /* Use chunk in next nonempty smallbin */5632mchunkptr b, p, r;5633size_t rsize;5634bindex_t i;5635binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));5636binmap_t leastbit = least_bit(leftbits);5637compute_bit2idx(leastbit, i);5638b = smallbin_at(ms, i);5639p = b->fd;5640assert(chunksize(p) == small_index2size(i));5641unlink_first_small_chunk(ms, b, p, i);5642rsize = small_index2size(i) - nb;5643/* Fit here cannot be remainderless if 4byte sizes */5644if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)5645set_inuse_and_pinuse(ms, p, small_index2size(i));5646else {5647set_size_and_pinuse_of_inuse_chunk(ms, p, nb);5648r = chunk_plus_offset(p, nb);5649set_size_and_pinuse_of_free_chunk(r, rsize);5650replace_dv(ms, r, rsize);5651}5652mem = chunk2mem(p);5653check_malloced_chunk(ms, mem, nb);5654goto postaction;5655}56565657else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {5658check_malloced_chunk(ms, mem, nb);5659goto postaction;5660}5661}5662}5663else if (bytes >= MAX_REQUEST)5664nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */5665else {5666nb = pad_request(bytes);5667if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {5668check_malloced_chunk(ms, mem, nb);5669goto postaction;5670}5671}56725673if (nb <= ms->dvsize) {5674size_t rsize = ms->dvsize - nb;5675mchunkptr p = ms->dv;5676if (rsize >= MIN_CHUNK_SIZE) { /* split dv */5677mchunkptr r = ms->dv = chunk_plus_offset(p, nb);5678ms->dvsize = rsize;5679set_size_and_pinuse_of_free_chunk(r, rsize);5680set_size_and_pinuse_of_inuse_chunk(ms, p, nb);5681}5682else { /* exhaust dv */5683size_t dvs = ms->dvsize;5684ms->dvsize = 0;5685ms->dv = 0;5686set_inuse_and_pinuse(ms, p, dvs);5687}5688mem = chunk2mem(p);5689check_malloced_chunk(ms, mem, nb);5690goto postaction;5691}56925693else if (nb < ms->topsize) { /* Split top */5694size_t rsize = ms->topsize -= nb;5695mchunkptr p = ms->top;5696mchunkptr r = ms->top = chunk_plus_offset(p, nb);5697r->head = rsize | PINUSE_BIT;5698set_size_and_pinuse_of_inuse_chunk(ms, p, nb);5699mem = chunk2mem(p);5700check_top_chunk(ms, ms->top);5701check_malloced_chunk(ms, mem, nb);5702goto postaction;5703}57045705mem = sys_alloc(ms, nb);57065707postaction:5708POSTACTION(ms);5709return mem;5710}57115712return 0;5713}57145715void mspace_free(mspace msp, void* mem) {5716if (mem != 0) {5717mchunkptr p = mem2chunk(mem);5718#if FOOTERS5719mstate fm = get_mstate_for(p);5720(void)msp; /* placate people compiling -Wunused */5721#else /* FOOTERS */5722mstate fm = (mstate)msp;5723#endif /* FOOTERS */5724if (!ok_magic(fm)) {5725USAGE_ERROR_ACTION(fm, p);5726return;5727}5728if (!PREACTION(fm)) {5729check_inuse_chunk(fm, p);5730if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {5731size_t psize = chunksize(p);5732mchunkptr next = chunk_plus_offset(p, psize);5733if (!pinuse(p)) {5734size_t prevsize = p->prev_foot;5735if (is_mmapped(p)) {5736psize += prevsize + MMAP_FOOT_PAD;5737if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)5738fm->footprint -= psize;5739goto postaction;5740}5741else {5742mchunkptr prev = chunk_minus_offset(p, prevsize);5743psize += prevsize;5744p = 
prev;5745if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */5746if (p != fm->dv) {5747unlink_chunk(fm, p, prevsize);5748}5749else if ((next->head & INUSE_BITS) == INUSE_BITS) {5750fm->dvsize = psize;5751set_free_with_pinuse(p, psize, next);5752goto postaction;5753}5754}5755else5756goto erroraction;5757}5758}57595760if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {5761if (!cinuse(next)) { /* consolidate forward */5762if (next == fm->top) {5763size_t tsize = fm->topsize += psize;5764fm->top = p;5765p->head = tsize | PINUSE_BIT;5766if (p == fm->dv) {5767fm->dv = 0;5768fm->dvsize = 0;5769}5770if (should_trim(fm, tsize))5771sys_trim(fm, 0);5772goto postaction;5773}5774else if (next == fm->dv) {5775size_t dsize = fm->dvsize += psize;5776fm->dv = p;5777set_size_and_pinuse_of_free_chunk(p, dsize);5778goto postaction;5779}5780else {5781size_t nsize = chunksize(next);5782psize += nsize;5783unlink_chunk(fm, next, nsize);5784set_size_and_pinuse_of_free_chunk(p, psize);5785if (p == fm->dv) {5786fm->dvsize = psize;5787goto postaction;5788}5789}5790}5791else5792set_free_with_pinuse(p, psize, next);57935794if (is_small(psize)) {5795insert_small_chunk(fm, p, psize);5796check_free_chunk(fm, p);5797}5798else {5799tchunkptr tp = (tchunkptr)p;5800insert_large_chunk(fm, tp, psize);5801check_free_chunk(fm, p);5802if (--fm->release_checks == 0)5803release_unused_segments(fm);5804}5805goto postaction;5806}5807}5808erroraction:5809USAGE_ERROR_ACTION(fm, p);5810postaction:5811POSTACTION(fm);5812}5813}5814}58155816void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {5817void* mem;5818size_t req = 0;5819mstate ms = (mstate)msp;5820if (!ok_magic(ms)) {5821USAGE_ERROR_ACTION(ms,ms);5822return 0;5823}5824if (n_elements != 0) {5825req = n_elements * elem_size;5826if (((n_elements | elem_size) & ~(size_t)0xffff) &&5827(req / n_elements != elem_size))5828req = MAX_SIZE_T; /* force downstream failure on overflow */5829}5830mem = internal_malloc(ms, req);5831if (mem != 0 && calloc_must_clear(mem2chunk(mem)))5832memset(mem, 0, req);5833return mem;5834}58355836void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {5837void* mem = 0;5838if (oldmem == 0) {5839mem = mspace_malloc(msp, bytes);5840}5841else if (bytes >= MAX_REQUEST) {5842MALLOC_FAILURE_ACTION;5843}5844#ifdef REALLOC_ZERO_BYTES_FREES5845else if (bytes == 0) {5846mspace_free(msp, oldmem);5847}5848#endif /* REALLOC_ZERO_BYTES_FREES */5849else {5850size_t nb = request2size(bytes);5851mchunkptr oldp = mem2chunk(oldmem);5852#if ! FOOTERS5853mstate m = (mstate)msp;5854#else /* FOOTERS */5855mstate m = get_mstate_for(oldp);5856if (!ok_magic(m)) {5857USAGE_ERROR_ACTION(m, oldmem);5858return 0;5859}5860#endif /* FOOTERS */5861if (!PREACTION(m)) {5862mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);5863POSTACTION(m);5864if (newp != 0) {5865check_inuse_chunk(m, newp);5866mem = chunk2mem(newp);5867}5868else {5869mem = mspace_malloc(m, bytes);5870if (mem != 0) {5871size_t oc = chunksize(oldp) - overhead_for(oldp);5872memcpy(mem, oldmem, (oc < bytes)? oc : bytes);5873mspace_free(m, oldmem);5874}5875}5876}5877}5878return mem;5879}58805881void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {5882void* mem = 0;5883if (oldmem != 0) {5884if (bytes >= MAX_REQUEST) {5885MALLOC_FAILURE_ACTION;5886}5887else {5888size_t nb = request2size(bytes);5889mchunkptr oldp = mem2chunk(oldmem);5890#if ! 
FOOTERS5891mstate m = (mstate)msp;5892#else /* FOOTERS */5893mstate m = get_mstate_for(oldp);5894(void)msp; /* placate people compiling -Wunused */5895if (!ok_magic(m)) {5896USAGE_ERROR_ACTION(m, oldmem);5897return 0;5898}5899#endif /* FOOTERS */5900if (!PREACTION(m)) {5901mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);5902POSTACTION(m);5903if (newp == oldp) {5904check_inuse_chunk(m, newp);5905mem = oldmem;5906}5907}5908}5909}5910return mem;5911}59125913void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {5914mstate ms = (mstate)msp;5915if (!ok_magic(ms)) {5916USAGE_ERROR_ACTION(ms,ms);5917return 0;5918}5919if (alignment <= MALLOC_ALIGNMENT)5920return mspace_malloc(msp, bytes);5921return internal_memalign(ms, alignment, bytes);5922}59235924void** mspace_independent_calloc(mspace msp, size_t n_elements,5925size_t elem_size, void* chunks[]) {5926size_t sz = elem_size; /* serves as 1-element array */5927mstate ms = (mstate)msp;5928if (!ok_magic(ms)) {5929USAGE_ERROR_ACTION(ms,ms);5930return 0;5931}5932return ialloc(ms, n_elements, &sz, 3, chunks);5933}59345935void** mspace_independent_comalloc(mspace msp, size_t n_elements,5936size_t sizes[], void* chunks[]) {5937mstate ms = (mstate)msp;5938if (!ok_magic(ms)) {5939USAGE_ERROR_ACTION(ms,ms);5940return 0;5941}5942return ialloc(ms, n_elements, sizes, 0, chunks);5943}59445945size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {5946return internal_bulk_free((mstate)msp, array, nelem);5947}59485949#if MALLOC_INSPECT_ALL5950void mspace_inspect_all(mspace msp,5951void(*handler)(void *start,5952void *end,5953size_t used_bytes,5954void* callback_arg),5955void* arg) {5956mstate ms = (mstate)msp;5957if (ok_magic(ms)) {5958if (!PREACTION(ms)) {5959internal_inspect_all(ms, handler, arg);5960POSTACTION(ms);5961}5962}5963else {5964USAGE_ERROR_ACTION(ms,ms);5965}5966}5967#endif /* MALLOC_INSPECT_ALL */59685969int mspace_trim(mspace msp, size_t pad) {5970int result = 0;5971mstate ms = (mstate)msp;5972if (ok_magic(ms)) {5973if (!PREACTION(ms)) {5974result = sys_trim(ms, pad);5975POSTACTION(ms);5976}5977}5978else {5979USAGE_ERROR_ACTION(ms,ms);5980}5981return result;5982}59835984#if !NO_MALLOC_STATS5985void mspace_malloc_stats(mspace msp) {5986mstate ms = (mstate)msp;5987if (ok_magic(ms)) {5988internal_malloc_stats(ms);5989}5990else {5991USAGE_ERROR_ACTION(ms,ms);5992}5993}5994#endif /* NO_MALLOC_STATS */59955996size_t mspace_footprint(mspace msp) {5997size_t result = 0;5998mstate ms = (mstate)msp;5999if (ok_magic(ms)) {6000result = ms->footprint;6001}6002else {6003USAGE_ERROR_ACTION(ms,ms);6004}6005return result;6006}60076008size_t mspace_max_footprint(mspace msp) {6009size_t result = 0;6010mstate ms = (mstate)msp;6011if (ok_magic(ms)) {6012result = ms->max_footprint;6013}6014else {6015USAGE_ERROR_ACTION(ms,ms);6016}6017return result;6018}60196020size_t mspace_footprint_limit(mspace msp) {6021size_t result = 0;6022mstate ms = (mstate)msp;6023if (ok_magic(ms)) {6024size_t maf = ms->footprint_limit;6025result = (maf == 0) ? 
MAX_SIZE_T : maf;6026}6027else {6028USAGE_ERROR_ACTION(ms,ms);6029}6030return result;6031}60326033size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {6034size_t result = 0;6035mstate ms = (mstate)msp;6036if (ok_magic(ms)) {6037if (bytes == 0)6038result = granularity_align(1); /* Use minimal size */6039if (bytes == MAX_SIZE_T)6040result = 0; /* disable */6041else6042result = granularity_align(bytes);6043ms->footprint_limit = result;6044}6045else {6046USAGE_ERROR_ACTION(ms,ms);6047}6048return result;6049}60506051#if !NO_MALLINFO6052struct mallinfo mspace_mallinfo(mspace msp) {6053mstate ms = (mstate)msp;6054if (!ok_magic(ms)) {6055USAGE_ERROR_ACTION(ms,ms);6056}6057return internal_mallinfo(ms);6058}6059#endif /* NO_MALLINFO */60606061size_t mspace_usable_size(const void* mem) {6062if (mem != 0) {6063mchunkptr p = mem2chunk(mem);6064if (is_inuse(p))6065return chunksize(p) - overhead_for(p);6066}6067return 0;6068}60696070int mspace_mallopt(int param_number, int value) {6071return change_mparam(param_number, value);6072}60736074#endif /* MSPACES */60756076// Export malloc and free as duplicate names emscripten_builtin_malloc and6077// emscripten_builtin_free so that applications can replace malloc and free6078// in their code, and make those replacements refer to the original dlmalloc6079// and dlfree from this file.6080// This allows an easy mechanism for hooking into memory allocation.6081#if defined(__EMSCRIPTEN__) && !ONLY_MSPACES6082extern __typeof(malloc) emscripten_builtin_malloc __attribute__((alias("dlmalloc")));6083extern __typeof(realloc) emscripten_builtin_realloc __attribute__((alias("dlrealloc")));6084extern __typeof(calloc) emscripten_builtin_calloc __attribute__((alias("dlcalloc")));6085extern __typeof(free) emscripten_builtin_free __attribute__((alias("dlfree")));6086extern __typeof(memalign) emscripten_builtin_memalign __attribute__((alias("dlmemalign")));6087#endif60886089/* -------------------- Alternative MORECORE functions ------------------- */60906091/*6092Guidelines for creating a custom version of MORECORE:60936094* For best performance, MORECORE should allocate in multiples of pagesize.6095* MORECORE may allocate more memory than requested. (Or even less,6096but this will usually result in a malloc failure.)6097* MORECORE must not allocate memory when given argument zero, but6098instead return one past the end address of memory from previous6099nonzero call.6100* For best performance, consecutive calls to MORECORE with positive6101arguments should return increasing addresses, indicating that6102space has been contiguously extended.6103* Even though consecutive calls to MORECORE need not return contiguous6104addresses, it must be OK for malloc'ed chunks to span multiple6105regions in those cases where they do happen to be contiguous.6106* MORECORE need not handle negative arguments -- it may instead6107just return MFAIL when given negative arguments.6108Negative arguments are always multiples of pagesize. MORECORE6109must not misinterpret negative args as large positive unsigned6110args unless UNSIGNED_MORECORE is defined. You can suppress all such calls6111from even occurring by defining MORECORE_CANNOT_TRIM,61126113As an example alternative MORECORE, here is a custom allocator6114kindly contributed for pre-OSX macOS. It uses virtually but not6115necessarily physically contiguous non-paged memory (locked in,6116present and won't get swapped out). 
You can use it by uncommenting6117this section, adding some #includes, and setting up the appropriate6118defines above:61196120#define MORECORE osMoreCore61216122There is also a shutdown routine that should somehow be called for6123cleanup upon program exit.61246125#define MAX_POOL_ENTRIES 1006126#define MINIMUM_MORECORE_SIZE (64 * 1024U)6127static int next_os_pool;6128void *our_os_pools[MAX_POOL_ENTRIES];61296130void *osMoreCore(int size)6131{6132void *ptr = 0;6133static void *sbrk_top = 0;61346135if (size > 0)6136{6137if (size < MINIMUM_MORECORE_SIZE)6138size = MINIMUM_MORECORE_SIZE;6139if (CurrentExecutionLevel() == kTaskLevel)6140ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);6141if (ptr == 0)6142{6143return (void *) MFAIL;6144}6145// save ptrs so they can be freed during cleanup6146our_os_pools[next_os_pool] = ptr;6147next_os_pool++;6148ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);6149sbrk_top = (char *) ptr + size;6150return ptr;6151}6152else if (size < 0)6153{6154// we don't currently support shrink behavior6155return (void *) MFAIL;6156}6157else6158{6159return sbrk_top;6160}6161}61626163// cleanup any allocated memory pools6164// called as last thing before shutting down driver61656166void osCleanupMem(void)6167{6168void **ptr;61696170for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)6171if (*ptr)6172{6173PoolDeallocate(*ptr);6174*ptr = 0;6175}6176}61776178*/617961806181/* -----------------------------------------------------------------------6182History:6183v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea6184* fix bad comparison in dlposix_memalign6185* don't reuse adjusted asize in sys_alloc6186* add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion6187* reduce compiler warnings -- thanks to all who reported/suggested these61886189v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)6190* Always perform unlink checks unless INSECURE6191* Add posix_memalign.6192* Improve realloc to expand in more cases; expose realloc_in_place.6193Thanks to Peter Buhr for the suggestion.6194* Add footprint_limit, inspect_all, bulk_free. Thanks6195to Barry Hayes and others for the suggestions.6196* Internal refactorings to avoid calls while holding locks6197* Use non-reentrant locks by default. Thanks to Roland McGrath6198for the suggestion.6199* Small fixes to mspace_destroy, reset_on_error.6200* Various configuration extensions/changes. Thanks6201to all who contributed these.62026203V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)6204* Update Creative Commons URL62056206V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee)6207* Use zeros instead of prev foot for is_mmapped6208* Add mspace_track_large_chunks; thanks to Jean Brouwers6209* Fix set_inuse in internal_realloc; thanks to Jean Brouwers6210* Fix insufficient sys_alloc padding when using 16byte alignment6211* Fix bad error check in mspace_footprint6212* Adaptations for ptmalloc; thanks to Wolfram Gloger.6213* Reentrant spin locks; thanks to Earl Chew and others6214* Win32 improvements; thanks to Niall Douglas and Earl Chew6215* Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options6216* Extension hook in malloc_state6217* Various small adjustments to reduce warnings on some compilers6218* Various configuration extensions/changes for more platforms. 
Thanks6219to all who contributed these.62206221V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)6222* Add max_footprint functions6223* Ensure all appropriate literals are size_t6224* Fix conditional compilation problem for some #define settings6225* Avoid concatenating segments with the one provided6226in create_mspace_with_base6227* Rename some variables to avoid compiler shadowing warnings6228* Use explicit lock initialization.6229* Better handling of sbrk interference.6230* Simplify and fix segment insertion, trimming and mspace_destroy6231* Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x6232* Thanks especially to Dennis Flanagan for help on these.62336234V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)6235* Fix memalign brace error.62366237V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)6238* Fix improper #endif nesting in C++6239* Add explicit casts needed for C++62406241V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)6242* Use trees for large bins6243* Support mspaces6244* Use segments to unify sbrk-based and mmap-based system allocation,6245removing need for emulation on most platforms without sbrk.6246* Default safety checks6247* Optional footer checks. Thanks to William Robertson for the idea.6248* Internal code refactoring6249* Incorporate suggestions and platform-specific changes.6250Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,6251Aaron Bachmann, Emery Berger, and others.6252* Speed up non-fastbin processing enough to remove fastbins.6253* Remove useless cfree() to avoid conflicts with other apps.6254* Remove internal memcpy, memset. Compilers handle builtins better.6255* Remove some options that no one ever used and rename others.62566257V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)6258* Fix malloc_state bitmap array misdeclaration62596260V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)6261* Allow tuning of FIRST_SORTED_BIN_SIZE6262* Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.6263* Better detection and support for non-contiguousness of MORECORE.6264Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger6265* Bypass most of malloc if no frees. Thanks To Emery Berger.6266* Fix freeing of old top non-contiguous chunk im sysmalloc.6267* Raised default trim and map thresholds to 256K.6268* Fix mmap-related #defines. Thanks to Lubos Lunak.6269* Fix copy macros; added LACKS_FCNTL_H. 
Thanks to Neal Walfield.6270* Branch-free bin calculation6271* Default trim and mmap thresholds now 256K.62726273V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)6274* Introduce independent_comalloc and independent_calloc.6275Thanks to Michael Pachos for motivation and help.6276* Make optional .h file available6277* Allow > 2GB requests on 32bit systems.6278* new WIN32 sbrk, mmap, munmap, lock code from <[email protected]>.6279Thanks also to Andreas Mueller <a.mueller at paradatec.de>,6280and Anonymous.6281* Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for6282helping test this.)6283* memalign: check alignment arg6284* realloc: don't try to shift chunks backwards, since this6285leads to more fragmentation in some programs and doesn't6286seem to help in any others.6287* Collect all cases in malloc requiring system memory into sysmalloc6288* Use mmap as backup to sbrk6289* Place all internal state in malloc_state6290* Introduce fastbins (although similar to 2.5.1)6291* Many minor tunings and cosmetic improvements6292* Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK6293* Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS6294Thanks to Tony E. Bennett <[email protected]> and others.6295* Include errno.h to support default failure action.62966297V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)6298* return null for negative arguments6299* Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>6300* Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'6301(e.g. WIN32 platforms)6302* Cleanup header file inclusion for WIN32 platforms6303* Cleanup code to avoid Microsoft Visual C++ compiler complaints6304* Add 'USE_DL_PREFIX' to quickly allow co-existence with existing6305memory allocation routines6306* Set 'malloc_getpagesize' for WIN32 platforms (needs more work)6307* Use 'assert' rather than 'ASSERT' in WIN32 code to conform to6308usage of 'assert' in non-WIN32 code6309* Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to6310avoid infinite loop6311* Always call 'fREe()' rather than 'free()'63126313V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)6314* Fixed ordering problem with boundary-stamping63156316V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)6317* Added pvalloc, as recommended by H.J. Liu6318* Added 64bit pointer support mainly from Wolfram Gloger6319* Added anonymously donated WIN32 sbrk emulation6320* Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen6321* malloc_extend_top: fix mask error that caused wastage after6322foreign sbrks6323* Add linux mremap support code from HJ Liu63246325V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)6326* Integrated most documentation with the code.6327* Add support for mmap, with help from6328Wolfram Gloger ([email protected]).6329* Use last_remainder in more cases.6330* Pack bins using idea from [email protected]6331* Use ordered bins instead of best-fit threshhold6332* Eliminate block-local decls to simplify tracing and debugging.6333* Support another case of realloc via move into top6334* Fix error occuring when initial sbrk_base not word-aligned.6335* Rely on page size for units instead of SBRK_UNIT to6336avoid surprises about sbrk alignment conventions.6337* Add mallinfo, mallopt. 
Thanks to Raymond Nijssen6338([email protected]) for the suggestion.6339* Add `pad' argument to malloc_trim and top_pad mallopt parameter.6340* More precautions for cases where other routines call sbrk,6341courtesy of Wolfram Gloger ([email protected]).6342* Added macros etc., allowing use in linux libc from6343H.J. Lu ([email protected])6344* Inverted this history list63456346V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)6347* Re-tuned and fixed to behave more nicely with V2.6.0 changes.6348* Removed all preallocation code since under current scheme6349the work required to undo bad preallocations exceeds6350the work saved in good cases for most test programs.6351* No longer use return list or unconsolidated bins since6352no scheme using them consistently outperforms those that don't6353given above changes.6354* Use best fit for very large chunks to prevent some worst-cases.6355* Added some support for debugging63566357V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)6358* Removed footers when chunks are in use. Thanks to6359Paul Wilson ([email protected]) for the suggestion.63606361V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)6362* Added malloc_trim, with help from Wolfram Gloger6363([email protected]).63646365V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)63666367V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)6368* realloc: try to expand in both directions6369* malloc: swap order of clean-bin strategy;6370* realloc: only conditionally expand backwards6371* Try not to scavenge used bins6372* Use bin counts as a guide to preallocation6373* Occasionally bin return list chunks in first scan6374* Add a few optimizations from [email protected]63756376V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)6377* faster bin computation & slightly different binning6378* merged all consolidations to one part of malloc proper6379(eliminating old malloc_find_space & malloc_clean_bin)6380* Scan 2 returns chunks (not just 1)6381* Propagate failure in realloc if malloc returns 06382* Add stuff to allow compilation on non-ANSI compilers6383from [email protected]63846385V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)6386* removed potential for odd address access in prev_chunk6387* removed dependency on getpagesize.h6388* misc cosmetics and a bit more internal documentation6389* anticosmetics: mangled names in macros to evade debugger strangeness6390* tested on sparc, hp-700, dec-mips, rs60006391with gcc & native cc (hp, dec only) allowing6392Detlefs & Zorn comparison study (in SIGPLAN Notices.)63936394Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)6395* Based loosely on libg++-1.2X malloc. (It retains some of the overall6396structure of old version, but most details differ.)63976398*/639964006401