GitHub Repository: MorsGames/sm64plus
Path: blob/master/src/pc/dlmalloc.c
// Windows has terrible malloc/free performance, so use dlmalloc
// instead. This makes malloc/free time per frame go from order of
// milliseconds to tens of microseconds.
#ifdef _WIN32
#include <errno.h>
#define FORCEINLINE // define this to nothing to make gcc happy
#define USE_LOCKS 1

/*
This is a version (aka dlmalloc) of malloc/free/realloc written by
Doug Lea and released to the public domain, as explained at
http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
comments, complaints, performance data, etc to [email protected]

* Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
Note: There may be an updated version of this malloc obtainable at
ftp://gee.cs.oswego.edu/pub/misc/malloc.c
Check before installing!

* Quickstart

This library is all in one file to simplify the most common usage:
ftp it, compile it (-O3), and link it into another program. All of
the compile-time options default to reasonable values for use on
most platforms. You might later want to step through various
compile-time and dynamic tuning options.

For convenience, an include file for code using this malloc is at:
ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
You don't really need this .h file unless you call functions not
defined in your system include files. The .h file contains only the
excerpts from this file needed for using this malloc on ANSI C/C++
systems, so long as you haven't changed compile-time options about
naming and tuning parameters. If you do, then you can create your
own malloc.h that does include all settings by cutting at the point
indicated below. Note that you may already by default be using a C
library containing a malloc that is based on some version of this
malloc (for example in linux). You might still want to use the one
in this file to customize settings or to avoid overheads associated
with library versions.

* Vital statistics:

Supported pointer/size_t representation: 4 or 8 bytes
size_t MUST be an unsigned type of the same width as
pointers. (If you are using an ancient system that declares
size_t as a signed type, or need it to be a different width
than pointers, you can use a previous release of this malloc
(e.g. 2.7.2) supporting these.)

Alignment: 8 bytes (minimum)
This suffices for nearly all current machines and C compilers.
However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.

Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
8 or 16 bytes (if 8-byte sizes)
Each malloced chunk has a hidden word of overhead holding size
and status information, and an additional cross-check word
if FOOTERS is defined.

Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
8-byte ptrs: 32 bytes (including overhead)

Even a request for zero bytes (i.e., malloc(0)) returns a
pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., number of extra bytes
allocated beyond what was requested in malloc) is less than or equal
to the minimum size, except for requests >= mmap_threshold that
are serviced via mmap(), where the worst case wastage is about
32 bytes plus the remainder from a system page (the minimal
mmap unit); typically 4096 or 8192 bytes.

Security: static-safe; optionally more or less
The "security" of malloc refers to the ability of malicious
code to accentuate the effects of errors (for example, freeing
space that is not currently malloc'ed or overwriting past the
ends of chunks) in code that calls malloc. This malloc
guarantees not to modify any memory locations below the base of
heap, i.e., static variables, even in the presence of usage
errors. The routines additionally detect most improper frees
and reallocs. All this holds as long as the static bookkeeping
for malloc itself is not corrupted by some other means. This
is only one aspect of security -- these checks do not, and
cannot, detect all possible programming errors.

If FOOTERS is defined nonzero, then each allocated chunk
carries an additional check word to verify that it was malloced
from its space. These check words are the same within each
execution of a program using malloc, but differ across
executions, so externally crafted fake chunks cannot be
freed. This improves security by rejecting frees/reallocs that
could corrupt heap memory, in addition to the always-on checks
preventing writes to statics. This may further improve
security at the expense of time and space overhead. (Note that
FOOTERS may also be worth using with MSPACES.)

By default detected errors cause the program to abort (calling
"abort()"). You can override this to instead proceed past
errors by defining PROCEED_ON_ERROR. In this case, a bad free
has no effect, and a malloc that encounters a bad address
caused by user overwrites will ignore the bad address by
dropping pointers and indices to all known memory. This may
be appropriate for programs that should continue if at all
possible in the face of programming errors, although they may
run out of memory because dropped memory is never reclaimed.

If you don't like either of these options, you can define
CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
else. And if you are sure that your program using malloc has
no errors or vulnerabilities, you can define INSECURE to 1,
which might (or might not) provide a small performance improvement.

It is also possible to limit the maximum total allocatable
space, using malloc_set_footprint_limit. This is not
designed as a security feature in itself (calls to set limits
are not screened or privileged), but may be useful as one
aspect of a secure implementation.

Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
When USE_LOCKS is defined, each public call to malloc, free,
etc is surrounded with a lock. By default, this uses a plain
pthread mutex, win32 critical section, or a spin-lock if
available for the platform and not disabled by setting
USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
recursive versions are used instead (which are not required for
base functionality but may be needed in layered extensions).
Using a global lock is not especially fast, and can be a major
bottleneck. It is designed only to provide minimal protection
in concurrent environments, and to provide a basis for
extensions. If you are using malloc in a concurrent program,
consider instead using nedmalloc
(http://www.nedprod.com/programs/portable/nedmalloc/) or
ptmalloc (See http://www.malloc.de), which are derived from
versions of this malloc.

System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
This malloc can use unix sbrk or any emulation (invoked using
the CALL_MORECORE macro) and/or mmap/munmap or any emulation
(invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
memory. On most unix systems, it tends to work best if both
MORECORE and MMAP are enabled. On Win32, it uses emulations
based on VirtualAlloc. It also uses common C library functions
like memset.

Compliance: I believe it is compliant with the Single Unix Specification
(See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
others as well.

* Overview of algorithms

This is not the fastest, most space-conserving, most portable, or
most tunable malloc ever written. However it is among the fastest
while also being among the most space-conserving, portable and
tunable. Consistent balance across these factors results in a good
general-purpose allocator for malloc-intensive programs.

In most ways, this malloc is a best-fit allocator. Generally, it
chooses the best-fitting existing chunk for a request, with ties
broken in approximately least-recently-used order. (This strategy
normally maintains low fragmentation.) However, for requests less
than 256 bytes, it deviates from best-fit when there is not an
exactly fitting available chunk by preferring to use space adjacent
to that used for the previous small request, as well as by breaking
ties in approximately most-recently-used order. (These enhance
locality of series of small allocations.) And for very large requests
(>= 256Kb by default), it relies on system memory mapping
facilities, if supported. (This helps avoid carrying around and
possibly fragmenting memory used only for large chunks.)

All operations (except malloc_stats and mallinfo) have execution
times that are bounded by a constant factor of the number of bits in
a size_t, not counting any clearing in calloc or copying in realloc,
or actions surrounding MORECORE and MMAP that have times
proportional to the number of non-contiguous regions returned by
system allocation routines, which is often just 1. In real-time
applications, you can optionally suppress segment traversals using
NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
system allocators return non-contiguous spaces, at the typical
expense of carrying around more memory and increased fragmentation.

The implementation is not very modular and seriously overuses
macros. Perhaps someday all C compilers will do as good a job
inlining modular code as can now be done by brute-force expansion,
but for now, enough of them seem not to.

Some compilers issue a lot of warnings about code that is
dead/unreachable only on some platforms, and also about intentional
uses of negation on unsigned types. All known cases of each can be
ignored.

For a longer but out of date high-level description, see
http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
If MSPACES is defined, then in addition to malloc, free, etc.,
this file also defines mspace_malloc, mspace_free, etc. These
are versions of malloc routines that take an "mspace" argument
obtained using create_mspace, to control all internal bookkeeping.
If ONLY_MSPACES is defined, only these versions are compiled.
So if you would like to use this allocator for only some allocations,
and your system malloc for others, you can compile with
ONLY_MSPACES and then do something like...
  static mspace mymspace = create_mspace(0,0); // for example
  #define mymalloc(bytes) mspace_malloc(mymspace, bytes)

(Note: If you only need one instance of an mspace, you can instead
use "USE_DL_PREFIX" to relabel the global malloc.)

You can similarly create thread-local allocators by storing
mspaces as thread-locals. For example:
  static __thread mspace tlms = 0;
  void* tlmalloc(size_t bytes) {
    if (tlms == 0) tlms = create_mspace(0, 0);
    return mspace_malloc(tlms, bytes);
  }
  void tlfree(void* mem) { mspace_free(tlms, mem); }

Unless FOOTERS is defined, each mspace is completely independent.
You cannot allocate from one and free to another (although
conformance is only weakly checked, so usage errors are not always
caught). If FOOTERS is defined, then each chunk carries around a tag
indicating its originating mspace, and frees are directed to their
originating spaces. Normally, this requires use of locks.

------------------------- Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.

WIN32 default: defined if _WIN32 defined
Defining WIN32 sets up defaults for MS environment and compilers.
Otherwise defaults are for unix. Beware that there seem to be some
cases where this malloc might not be a pure drop-in replacement for
Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
SetDIBits()) may be due to bugs in some video driver implementations
when pixel buffers are malloc()ed, and the region spans more than
one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
default granularity, pixel buffers may straddle virtual allocation
regions more often than when using the Microsoft allocator. You can
avoid this by using VirtualAlloc() and VirtualFree() for all pixel
buffers rather than using malloc(). If this is not possible,
recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
in cases where MSC and gcc (cygwin) are known to differ on WIN32,
conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT default: extern
Defines how public APIs are declared. If you want to export via a
Windows DLL, you might define this as
#define DLMALLOC_EXPORT extern __declspec(dllexport)
If you want a POSIX ELF shared object, you might use
#define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
Controls the minimum alignment for malloc'ed chunks. It must be a
power of two and at least 8, even on machines for which smaller
alignments would suffice. It may be defined as larger than this
though. Note however that code and data structures are optimized for
the case of 8-byte alignment.

MSPACES default: 0 (false)
If true, compile in support for independent allocation spaces.
This is only supported if HAVE_MMAP is true.

ONLY_MSPACES default: 0 (false)
If true, only compile in mspace versions, not regular versions.

USE_LOCKS default: 0 (false)
Causes each call to each public routine to be surrounded with
pthread or WIN32 mutex lock/unlock. (If set true, this can be
overridden on a per-mspace basis for mspace versions.) If set to a
non-zero value other than 1, locks are used, but their
implementation is left out, so lock functions must be supplied manually,
as described below.

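As one sketch of what "supplied manually" might look like, a pthread-based definition of the user-defined-lock hooks could be written as below. The macro names (MLOCK_T, INITIAL_LOCK, etc.) are taken from dlmalloc 2.8.6's user-defined-locks section; verify them against your copy of the source before relying on this:

```c
/* configuration sketch only, not a runnable program: supply our own
   locks by setting USE_LOCKS to a value other than 1 */
#include <pthread.h>

#define USE_LOCKS 2                /* locks on, implementation supplied here */
#define MLOCK_T            pthread_mutex_t
#define INITIAL_LOCK(lk)   pthread_mutex_init(lk, NULL)
#define DESTROY_LOCK(lk)   pthread_mutex_destroy(lk)
#define ACQUIRE_LOCK(lk)   pthread_mutex_lock(lk)
#define RELEASE_LOCK(lk)   pthread_mutex_unlock(lk)
#define TRY_LOCK(lk)       (!pthread_mutex_trylock(lk))
```

Since these macros must be visible before the allocator's own lock code is compiled, they would go above the point where this file (or a wrapper header) is included.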
USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
If true, uses custom spin locks for locking. This is currently
supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
MS compilers. Otherwise, posix locks or win32 critical sections are
used.

USE_RECURSIVE_LOCKS default: not defined
If defined nonzero, uses recursive (aka reentrant) locks, otherwise
uses plain mutexes. This is not required for malloc proper, but may
be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK default: not defined
If defined nonzero, performs pthread_atfork upon initialization
to initialize child lock while holding parent lock. The implementation
assumes that pthread locks (not custom locks) are being used. In other
cases, you may need to customize the implementation.

FOOTERS default: 0
If true, provide extra checking and dispatching by placing
information in the footers of allocated chunks. This adds
space and time overhead.

INSECURE default: 0
If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX default: NOT defined
Causes compiler to prefix all public routines with the string 'dl'.
This can be useful when you only want to use this malloc in one part
of a program, using your regular system malloc elsewhere.

MALLOC_INSPECT_ALL default: NOT defined
If defined, compiles malloc_inspect_all and mspace_inspect_all, which
perform a traversal of all heap space. Unless access to these
functions is otherwise restricted, you probably do not want to
include them in secure implementations.

ABORT default: defined as abort()
Defines how to abort on failed checks. On most systems, a failed
check cannot die with an "assert" or even print an informative
message, because the underlying print routines in turn call malloc,
which will fail again. Generally, the best policy is to simply call
abort(). It's not very useful to do more than this because many
errors due to overwriting will show up as address faults (null, odd
addresses etc) rather than malloc-triggered checks, so will also
abort. Also, most compilers know that abort() does not return, so
can better optimize code conditionally calling it.

PROCEED_ON_ERROR default: defined as 0 (false)
Controls whether detected bad addresses are bypassed rather than
causing an abort. If set, detected bad arguments to free and
realloc are ignored. And all bookkeeping information is zeroed out
upon a detected overwrite of freed heap space, thus losing the
ability to ever return it from malloc again, but enabling the
application to proceed. If PROCEED_ON_ERROR is defined, the
static variable malloc_corruption_error_count is compiled in
and can be examined to see if errors have occurred. This option
generates slower code than the default abort policy.

DEBUG default: NOT defined
The DEBUG setting is mainly intended for people trying to modify
this code or diagnose problems when porting to new platforms.
However, it may also be able to better isolate user errors than just
using runtime checks. The assertions in the check routines spell
out in more detail the assumptions and invariants underlying the
algorithms. The checking is fairly extensive, and will slow down
execution noticeably. Calling malloc_stats or mallinfo with DEBUG
set will attempt to check every non-mmapped allocated and free chunk
in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
Debugging assertion failures can be nearly impossible if your
version of the assert macro causes malloc to be called, which will
lead to a cascade of further failures, blowing the runtime stack.
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
which will usually make debugging easier.

MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
The action to take before "return 0" when malloc fails because
no memory is available.

HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
True if this system supports sbrk or an emulation of it.

MORECORE default: sbrk
The name of the sbrk-style system routine to call to obtain more
memory. See below for guidance on writing custom MORECORE
functions. The type of the argument to sbrk/MORECORE varies across
systems. It cannot be size_t, because it supports negative
arguments, so it is normally the signed type of the same width as
size_t (sometimes declared as "intptr_t"). It doesn't much matter
though. Internally, we only call it with arguments less than half
the max value of a size_t, which should work across all reasonable
possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
If true, take advantage of the fact that consecutive calls to MORECORE
with positive arguments always return contiguous increasing
addresses. This is true of unix sbrk. It does not hurt too much to
set it true anyway, since malloc copes with non-contiguities.
Setting it false when MORECORE is definitely non-contiguous, though,
saves the time and possibly wasted space it would otherwise take
to discover this.

MORECORE_CANNOT_TRIM default: NOT defined
True if MORECORE cannot release space back to the system when given
negative arguments. This is generally necessary only if you are
using a hand-crafted MORECORE function that cannot handle negative
arguments.

NO_SEGMENT_TRAVERSAL default: 0
If non-zero, suppresses traversals of memory segments
returned by either MORECORE or CALL_MMAP. This disables the
merging of contiguous segments and the selective release of
unused segments to the OS, but bounds execution times.

HAVE_MMAP default: 1 (true)
True if this system supports mmap or an emulation of it. If so, and
HAVE_MORECORE is not true, MMAP is used for all system
allocation. If set and HAVE_MORECORE is true as well, MMAP is
primarily used to directly allocate very large blocks. It is also
used as a backup strategy in cases where MORECORE fails to provide
space from system. Note: A single call to MUNMAP is assumed to be
able to unmap memory that may have been allocated using multiple calls
to MMAP, so long as they are adjacent.

HAVE_MREMAP default: 1 on linux, else 0
If true, realloc() uses mremap() to re-allocate large blocks and
extend or shrink allocation spaces.

MMAP_CLEARS default: 1 except on WINCE.
True if mmap clears memory so calloc doesn't need to. This is true
for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS default: 0 (i.e., not used)
Causes malloc to use the builtin ffs() function to compute indices.
Some compilers may recognize and intrinsify ffs to be faster than the
supplied C version. Also, the case of x86 using gcc is special-cased
to an asm instruction, so is already as fast as it can be, and so
this setting has no effect. Similarly for Win32 under recent MS compilers.
(On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize default: derive from system includes, or 4096.
The system page size. To the extent possible, this malloc manages
memory from the system in page-size units. This may be (and
usually is) a function rather than a constant. This is ignored
if WIN32, where page size is determined using GetSystemInfo during
initialization.

USE_DEV_RANDOM default: 0 (i.e., not used)
Causes malloc to use /dev/random to initialize secure magic seed for
stamping footers. Otherwise, the current time is used.

NO_MALLINFO default: 0
If defined, don't compile "mallinfo". This can be a simple way
of dealing with mismatches between system declarations and
those in this file.

MALLINFO_FIELD_TYPE default: size_t
The type of the fields in the mallinfo struct. This was originally
defined as "int" in SVID etc, but is more usefully defined as
size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS default: 0
If defined, don't compile "malloc_stats". This avoids calls to
fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES default: not defined
This should be set if a call to realloc with zero bytes should
be the same as a call to free. Some people think it should. Otherwise,
since this malloc returns a unique pointer for malloc(0),
realloc(p, 0) does as well.

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
Define these if your system does not have these header files.
You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
system_info.dwAllocationGranularity in WIN32,
otherwise 64K.
Also settable using mallopt(M_GRANULARITY, x)
The unit for allocating and deallocating memory from the system. On
most systems with contiguous MORECORE, there is no reason to
make this more than a page. However, systems with MMAP tend to
either require or encourage larger granularities. You can increase
this value to prevent system allocation functions from being called so
often, especially if they are slow. The value must be at least one
page and must be a power of two. Setting to 0 causes initialization
to either page size or win32 region size. (Note: In previous
versions of malloc, the equivalent of this option was called
"TOP_PAD")

DEFAULT_TRIM_THRESHOLD default: 2MB
Also settable using mallopt(M_TRIM_THRESHOLD, x)
The maximum amount of unused top-most memory to keep before
releasing via malloc_trim in free(). Automatic trimming is mainly
useful in long-lived programs using contiguous MORECORE. Because
trimming via sbrk can be slow on some systems, and can sometimes be
wasteful (in cases where programs immediately afterward allocate
more large chunks) the value should be high enough so that your
overall system performance would improve by releasing this much
memory. As a rough guide, you might set to a value close to the
average size of a process (program) running on your system.
Releasing this much memory would allow such a process to run in
memory. Generally, it is worth tuning trim thresholds when a
program undergoes phases where several large chunks are allocated
and released in ways that can reuse each other's storage, perhaps
mixed with phases where there are no such chunks at all. The trim
value must be greater than page size to have any useful effect. To
disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
some people use of mallocing a huge space and then freeing it at
program startup, in an attempt to reserve system memory, doesn't
have the intended effect under automatic trimming, since that memory
will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD default: 256K
Also settable using mallopt(M_MMAP_THRESHOLD, x)
The request size threshold for using MMAP to directly service a
request. Requests of at least this size that cannot be allocated
using already-existing space will be serviced via mmap. (If enough
normal freed space already exists it is used instead.) Using mmap
segregates relatively large chunks of memory so that they can be
individually obtained and released from the host system. A request
serviced through mmap is never reused by any other request (at least
not directly; the system may just so happen to remap successive
requests to the same locations). Segregating space in this way has
the benefits that: Mmapped space can always be individually released
back to the system, which helps keep the system level memory demands
of a long-lived program low. Also, mapped memory doesn't become
`locked' between other chunks, as can happen with normally allocated
chunks, which means that even trimming via malloc_trim would not
release them. However, it has the disadvantage that the space
cannot be reclaimed, consolidated, and then used to service later
requests, as happens with normal chunks. The advantages of mmap
nearly always outweigh disadvantages for "large" chunks, but the
value of "large" may vary across systems. The default is an
empirically derived value that works well in most systems. You can
disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
The number of consolidated frees between checks to release
unused segments when freeing. When using non-contiguous segments,
especially with multiple mspaces, checking only for topmost space
doesn't always suffice to trigger trimming. To compensate for this,
free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
current number of segments, if greater) try to release unused
segments to the OS when freeing chunks that result in
consolidation. The best value for this parameter is a compromise
between slowing down frees with relatively costly checks that
rarely trigger versus holding on to unused memory. To effectively
disable, set to MAX_SIZE_T. This may lead to a very slight speed
improvement at the expense of carrying around more memory.
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */

#ifndef DLMALLOC_EXPORT
#define DLMALLOC_EXPORT extern
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tchar.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define LACKS_SCHED_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* MMAP_CLEARS */
#endif /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif /* HAVE_MORECORE */
#endif /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h> /* For size_t */
#endif /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T (~(size_t)0)

#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
#define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
                   (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
#endif /* USE_LOCKS */

#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
#if ((defined(__GNUC__) &&                                           \
      ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||   \
       defined(__i386__) || defined(__x86_64__))) ||                 \
     (defined(_MSC_VER) && _MSC_VER>=1310))
#ifndef USE_SPIN_LOCKS
#define USE_SPIN_LOCKS 1
#endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
#error "USE_SPIN_LOCKS defined without implementation"
#endif /* ... locks available... */
#elif !defined(USE_SPIN_LOCKS)
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS */

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else /* ONLY_MSPACES */
#define MSPACES 0
#endif /* ONLY_MSPACES */
#endif /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
#endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
#define ABORT abort()
#endif /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif /* PROCEED_ON_ERROR */

#ifndef INSECURE
#define INSECURE 0
#endif /* INSECURE */
#ifndef MALLOC_INSPECT_ALL
#define MALLOC_INSPECT_ALL 0
#endif /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#define _GNU_SOURCE /* Turns on mremap() definition */
#else /* linux */
#define HAVE_MREMAP 0
#endif /* linux */
#endif /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION errno = ENOMEM;
#endif /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif /* ONLY_MSPACES */
#endif /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif /* MORECORE_CONTIGUOUS */
#endif /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
#else /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif /* MORECORE_CONTIGUOUS */
#endif /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif /* MORECORE_CANNOT_TRIM */
#endif /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
706
#endif /* MAX_RELEASE_CHECK_RATE */
707
#ifndef USE_BUILTIN_FFS
708
#define USE_BUILTIN_FFS 0
709
#endif /* USE_BUILTIN_FFS */
710
#ifndef USE_DEV_RANDOM
711
#define USE_DEV_RANDOM 0
712
#endif /* USE_DEV_RANDOM */
713
#ifndef NO_MALLINFO
714
#define NO_MALLINFO 0
715
#endif /* NO_MALLINFO */
716
#ifndef MALLINFO_FIELD_TYPE
717
#define MALLINFO_FIELD_TYPE size_t
718
#endif /* MALLINFO_FIELD_TYPE */
719
#ifndef NO_MALLOC_STATS
720
#define NO_MALLOC_STATS 0
721
#endif /* NO_MALLOC_STATS */
722
#ifndef NO_SEGMENT_TRAVERSAL
723
#define NO_SEGMENT_TRAVERSAL 0
724
#endif /* NO_SEGMENT_TRAVERSAL */
725
726
/*
727
mallopt tuning options. SVID/XPG defines four standard parameter
728
numbers for mallopt, normally defined in malloc.h. None of these
729
are used in this malloc, so setting them has no effect. But this
730
malloc does support the following options.
731
*/
732
733
#define M_TRIM_THRESHOLD (-1)
734
#define M_GRANULARITY (-2)
735
#define M_MMAP_THRESHOLD (-3)
736
737
/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo(). The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc. These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else a compliant version is
  declared below. These must be precisely the same for mallinfo() to
  work. The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define them as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
#if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
#elif defined(_MSC_VER)
#define FORCEINLINE __forceinline
#endif
#endif
#ifndef NOINLINE
#if defined(__GNUC__)
#define NOINLINE __attribute__ ((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
#define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
#define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc calloc
#define dlfree free
#define dlmalloc malloc
#define dlmemalign memalign
#define dlposix_memalign posix_memalign
#define dlrealloc realloc
#define dlrealloc_in_place realloc_in_place
#define dlvalloc valloc
#define dlpvalloc pvalloc
#define dlmallinfo mallinfo
#define dlmallopt mallopt
#define dlmalloc_trim malloc_trim
#define dlmalloc_stats malloc_stats
#define dlmalloc_usable_size malloc_usable_size
#define dlmalloc_footprint malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlmalloc_footprint_limit malloc_footprint_limit
#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
#define dlmalloc_inspect_all malloc_inspect_all
#define dlindependent_calloc independent_calloc
#define dlindependent_comalloc independent_comalloc
#define dlbulk_free bulk_free
#endif /* USE_DL_PREFIX */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32-bit systems, and 32 bytes on 64-bit
  systems.) Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
DLMALLOC_EXPORT void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, which had previously
  been allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or was already
  freed, free(p) will by default cause the current program to abort.
*/
DLMALLOC_EXPORT void dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible. realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old Unix realloc convention of allowing the last-freed chunk
  to be used as an argument to realloc is not supported.
*/
DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
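
/*
  A minimal usage sketch of the routines above (illustrative only, not
  part of the original dlmalloc documentation). It assumes a build where
  USE_DL_PREFIX is unset, so the plain names resolve to the dl*
  implementations:

    char* buf = (char*) malloc(64);          // at least 64 usable bytes
    if (buf == 0) handle_oom();              // errno is ENOMEM on failure
    char* bigger = (char*) realloc(buf, 256);
    if (bigger == 0) {                       // on failure buf is NOT freed,
      free(buf);                             // so it must still be released
      handle_oom();
    }
    free(bigger);                            // a null argument would be a no-op

  handle_oom is a hypothetical error handler, not a dlmalloc function.
*/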

/*
  realloc_in_place(void* p, size_t n)
  Resizes the space allocated for p to size n, only if this can be
  done without moving p (i.e., only if there is adjacent space
  available if n is greater than p's current allocated size, or n is
  less than or equal to p's size). This may be used instead of plain
  realloc if an alternative allocation strategy is needed upon failure
  to expand space; for example, reallocation of a buffer that must be
  memory-aligned or cleared. You can use realloc_in_place to trigger
  these alternatives only when needed.

  Returns p if successful; otherwise null.
*/
DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);

/*
  int posix_memalign(void** pp, size_t alignment, size_t n);
  Allocates a chunk of n bytes, aligned in accord with the alignment
  argument. Differs from memalign only in that it (1) assigns the
  allocated memory to *pp rather than returning it, (2) fails and
  returns EINVAL if the alignment is not a power of two, and (3) fails
  and returns ENOMEM if memory cannot be allocated.
*/
DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
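
/*
  An illustrative sketch of the alignment routines above (not part of
  the original documentation). The alignment must be a power of two for
  posix_memalign, while memalign rounds other values up:

    void* p = 0;
    int rc = posix_memalign(&p, 64, 1024);   // 1024 bytes, 64-byte aligned
    if (rc == 0) {
      assert(((size_t)p & 63) == 0);         // address is a multiple of 64
      free(p);                               // release with plain free
    }                                        // else rc is EINVAL or ENOMEM
*/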

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
DLMALLOC_EXPORT void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair. mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0. To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h. None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details. Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default        allowed param values
  M_TRIM_THRESHOLD    -1     2*1024*1024    any (-1 disables)
  M_GRANULARITY       -2     page size      any power of 2 >= page size
  M_MMAP_THRESHOLD    -3     256*1024       any (or 0 if no MMAP support)
*/
DLMALLOC_EXPORT int dlmallopt(int, int);
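
/*
  A hedged tuning example for mallopt (the values here are illustrative,
  not recommendations):

    mallopt(M_TRIM_THRESHOLD, 128 * 1024);   // trim when >= 128KB free at top
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // mmap requests of 1MB and up
    mallopt(M_TRIM_THRESHOLD, -1);           // -1 maps to max size_t: never trim

  Each call returns 1 on success and 0 if the value was rejected.
*/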

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system. The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption. Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);

/*
  malloc_footprint_limit();
  Returns the number of bytes that the heap is allowed to obtain from
  the system, returning the last value returned by
  malloc_set_footprint_limit, or the maximum size_t value if
  never set. The returned value reflects a permission. There is no
  guarantee that this number of bytes can actually be obtained from
  the system.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();

/*
  malloc_set_footprint_limit();
  Sets the maximum number of bytes to obtain from the system, causing
  failure returns from malloc and related functions upon attempts to
  exceed this value. The argument value may be subject to page
  rounding to an enforceable limit; this actual value is returned.
  Using an argument of the maximum possible size_t effectively
  disables checks. If the argument is less than or equal to the
  current malloc_footprint, then all future allocations that require
  additional system memory will fail. However, invocation cannot
  retroactively deallocate existing used memory.
*/
DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);

#if MALLOC_INSPECT_ALL
/*
  malloc_inspect_all(void(*handler)(void *start,
                                    void *end,
                                    size_t used_bytes,
                                    void* callback_arg),
                     void* arg);
  Traverses the heap and calls the given handler for each managed
  region, skipping all bytes that are (or may be) used for bookkeeping
  purposes. Traversal does not include chunks that have been
  directly memory mapped. Each reported region begins at the start
  address, and continues up to but not including the end address. The
  first used_bytes of the region contain allocated data. If
  used_bytes is zero, the region is unallocated. The handler is
  invoked with the given callback argument. If locks are defined, they
  are held during the entire traversal. It is a bad idea to invoke
  other malloc functions from within the handler.

  For example, to count the number of in-use chunks with size greater
  than 1000, you could write:
    static int count = 0;
    void count_chunks(void* start, void* end, size_t used, void* arg) {
      if (used >= 1000) ++count;
    }
  then:
    malloc_inspect_all(count_chunks, NULL);

  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                                          void* arg);

#endif /* MALLOC_INSPECT_ALL */

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
               than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools. It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int i;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool); // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array. It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time. For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char* body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);

/*
  bulk_free(void* array[], size_t n_elements)
  Frees and clears (sets to null) each non-null pointer in the given
  array. This is likely to be faster than freeing them one-by-one.
  If footers are used, pointers that have been allocated in different
  mspaces are not freed or cleared, and the count of all such pointers
  is returned. For large arrays of pointers with poor locality, it
  may be worthwhile to sort this array before calling bulk_free.
*/
DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
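
/*
  An illustrative sketch combining independent_calloc with bulk_free
  (not part of the original documentation; struct Node is the
  hypothetical type from the independent_calloc example above):

    void** ptrs = independent_calloc(16, sizeof(struct Node), 0);
    if (ptrs != 0) {
      // ... use ptrs[0] .. ptrs[15] ...
      size_t unfreed = bulk_free(ptrs, 16);  // frees and nulls each element
      assert(unfreed == 0);                  // nonzero only with foreign mspaces
      free(ptrs);                            // the pointer array itself
    }
*/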

/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
DLMALLOC_EXPORT void* dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
DLMALLOC_EXPORT int dlmalloc_trim(size_t);
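
/*
  An illustrative trimming sketch (not part of the original
  documentation; big_buffer and log_trimmed are placeholders, not
  dlmalloc names):

    free(big_buffer);                        // some large allocation
    if (malloc_trim(64 * 1024))              // keep 64KB of trailing slack
      log_trimmed();                         // memory was returned to the OS
*/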

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
DLMALLOC_EXPORT void dlmalloc_stats(void);

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

    p = malloc(n);
    assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);

#endif /* ONLY_MSPACES */

#if MSPACES

/*
  mspace is an opaque type representing an independent
  region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size. It
  returns null if there is no system memory available to create the
  space. If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests. You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
*/
DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);

/*
  create_mspace_with_base uses the memory supplied as the initial base
  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
  space is used for bookkeeping, so the capacity must be at least this
  large. (Otherwise 0 is returned.) When this initial space is
  exhausted, additional memory will be obtained from the system.
  Destroying this space will deallocate all additionally allocated
  space (if possible) but not the initial base.
*/
DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);

/*
  mspace_track_large_chunks controls whether requests for large chunks
  are allocated in their own untracked mmapped regions, separate from
  others in this mspace. By default large chunks are not tracked,
  which reduces fragmentation. However, such chunks are not
  necessarily released to the system upon destroy_mspace. Enabling
  tracking by setting to true may increase fragmentation, but avoids
  leakage when relying on destroy_mspace to release all memory
  allocated using this space. The function returns the previous
  setting.
*/
DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);


/*
  mspace_malloc behaves as malloc, but operates within
  the given space.
*/
DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);

/*
  mspace_free behaves as free, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_free is not actually needed.
  free may be called instead of mspace_free because freed chunks from
  any space are handled by their originating spaces.
*/
DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
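
/*
  A minimal mspace lifecycle sketch (illustrative only; these routines
  are compiled only when MSPACES is nonzero, which is not the default
  configuration of this build):

    mspace arena = create_mspace(0, 0);      // default capacity, no locking
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      if (p != 0)
        mspace_free(arena, p);
      size_t released = destroy_mspace(arena); // returns everything at once
      (void)released;                          // total bytes freed to system
    }
*/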

/*
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed. realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.
*/
DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
  mspace_calloc behaves as calloc, but operates within
  the given space.
*/
DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
  mspace_memalign behaves as memalign, but operates within
  the given space.
*/
DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);

/*
  mspace_independent_calloc behaves as independent_calloc, but
  operates within the given space.
*/
DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                                 size_t elem_size, void* chunks[]);

/*
  mspace_independent_comalloc behaves as independent_comalloc, but
  operates within the given space.
*/
DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                                   size_t sizes[], void* chunks[]);

/*
  mspace_footprint() returns the number of bytes obtained from the
  system for this space.
*/
DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);

/*
  mspace_max_footprint() returns the peak number of bytes obtained from the
  system for this space.
*/
DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);


#if !NO_MALLINFO
/*
  mspace_mallinfo behaves as mallinfo, but reports properties of
  the given space.
*/
DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
#endif /* NO_MALLINFO */

/*
  mspace_usable_size(const void* p) behaves the same as
  malloc_usable_size, reporting the usable size of p's chunk.
*/
DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1404
1405
/*
1406
mspace_malloc_stats behaves as malloc_stats, but reports
1407
properties of the given space.
1408
*/
1409
DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1410
1411
/*
1412
mspace_trim behaves as malloc_trim, but
1413
operates within the given space.
1414
*/
1415
DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1416
1417
/*
1418
An alias for mallopt.
1419
*/
1420
DLMALLOC_EXPORT int mspace_mallopt(int, int);

#endif /* MSPACES */

#ifdef __cplusplus
}  /* end of extern "C" */
#endif /* __cplusplus */

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef _MSC_VER
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* _MSC_VER */
#if !NO_MALLOC_STATS
#include <stdio.h>       /* for printing in malloc_stats */
#endif /* NO_MALLOC_STATS */
#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#undef assert
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#ifndef assert
#define assert(x)
#endif
#define DEBUG 0
#endif /* DEBUG */
#if !defined(WIN32) && !defined(LACKS_TIME_H)
#include <time.h>        /* for magic initialization */
#endif /* WIN32 */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifndef LACKS_STRING_H
#include <string.h>      /* for memset etc */
#endif /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
#if (defined(linux) && !defined(__USE_GNU))
#define __USE_GNU 1
#include <sys/mman.h>    /* for mmap */
#undef __USE_GNU
#else
#include <sys/mman.h>    /* for mmap */
#endif /* linux */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#ifndef LACKS_UNISTD_H
#include <unistd.h>      /* for sbrk, sysconf */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void* sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */

/* Declarations for locking */
#if USE_LOCKS
#ifndef WIN32
#if defined (__SVR4) && defined (__sun)  /* solaris */
#include <thread.h>
#elif !defined(LACKS_SCHED_H)
#include <sched.h>
#endif /* solaris or LACKS_SCHED_H */
#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
#include <pthread.h>
#endif /* USE_RECURSIVE_LOCKS ... */
#elif defined(_MSC_VER)
#ifndef _M_AMD64
/* These are already defined on AMD64 builds */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
#ifdef __cplusplus
}
#endif /* __cplusplus */
#endif /* _M_AMD64 */
#pragma intrinsic (_InterlockedCompareExchange)
#pragma intrinsic (_InterlockedExchange)
#define interlockedcompareexchange _InterlockedCompareExchange
#define interlockedexchange _InterlockedExchange
#elif defined(WIN32) && defined(__GNUC__)
#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
#define interlockedexchange __sync_lock_test_and_set
#endif /* Win32 */
#else /* USE_LOCKS */
#endif /* USE_LOCKS */

#ifndef LOCK_AT_FORK
#define LOCK_AT_FORK 0
#endif

/* Declarations for bit scanning on win32 */
#if defined(_MSC_VER) && _MSC_VER>=1300
#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
#ifdef __cplusplus
}
#endif /* __cplusplus */

#define BitScanForward _BitScanForward
#define BitScanReverse _BitScanReverse
#pragma intrinsic(_BitScanForward)
#pragma intrinsic(_BitScanReverse)
#endif /* BitScanForward */
#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)
#define SIZE_T_ONE          ((size_t)1)
#define SIZE_T_TWO          ((size_t)2)
#define SIZE_T_FOUR         ((size_t)4)
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
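/*
  Worked example (added commentary, not part of the original dlmalloc
  source; assumes a typical 64-bit build where MALLOC_ALIGNMENT == 16,
  so CHUNK_ALIGN_MASK == 15):

    align_offset(0x1000) == 0                      (already aligned)
    align_offset(0x1003) == (16 - 3) & 15 == 13    (0x1003 + 13 == 0x1010)
*/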

/* -------------------------- MMAP preliminaries ------------------------- */

/*
  If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
  checks to fail so compiler optimizer can delete code rather than
  using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if HAVE_MMAP

#ifndef WIN32
#define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)

#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static FORCEINLINE void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static FORCEINLINE void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static FORCEINLINE int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = (char*)ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}

#define MMAP_DEFAULT(s)             win32mmap(s)
#define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
#define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */

#if HAVE_MREMAP
#ifndef WIN32
#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#endif /* WIN32 */
#endif /* HAVE_MREMAP */

/**
 * Define CALL_MORECORE
 */
#if HAVE_MORECORE
    #ifdef MORECORE
        #define CALL_MORECORE(S)    MORECORE(S)
    #else  /* MORECORE */
        #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
    #endif /* MORECORE */
#else  /* HAVE_MORECORE */
    #define CALL_MORECORE(S)        MFAIL
#endif /* HAVE_MORECORE */

/**
 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
 */
#if HAVE_MMAP
    #define USE_MMAP_BIT            (SIZE_T_ONE)

    #ifdef MMAP
        #define CALL_MMAP(s)        MMAP(s)
    #else /* MMAP */
        #define CALL_MMAP(s)        MMAP_DEFAULT(s)
    #endif /* MMAP */
    #ifdef MUNMAP
        #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
    #else /* MUNMAP */
        #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
    #endif /* MUNMAP */
    #ifdef DIRECT_MMAP
        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
    #else /* DIRECT_MMAP */
        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
    #endif /* DIRECT_MMAP */
#else  /* HAVE_MMAP */
    #define USE_MMAP_BIT            (SIZE_T_ZERO)

    #define MMAP(s)                 MFAIL
    #define MUNMAP(a, s)            (-1)
    #define DIRECT_MMAP(s)          MFAIL
    #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
    #define CALL_MMAP(s)            MMAP(s)
    #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
#endif /* HAVE_MMAP */

/**
 * Define CALL_MREMAP
 */
#if HAVE_MMAP && HAVE_MREMAP
    #ifdef MREMAP
        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
    #else /* MREMAP */
        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
    #endif /* MREMAP */
#else  /* HAVE_MMAP && HAVE_MREMAP */
    #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

/*
  When locks are defined, there is one global lock, plus
  one per-mspace lock.

  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once.  It also protects
  sequences of calls to MORECORE.  In many cases sys_alloc requires
  two calls that should not be interleaved with calls by other
  threads.  This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope as best we can with interference.

  Per-mspace locks surround calls to malloc, free, etc.
  By default, locks are simple non-reentrant mutexes.

  Because lock-protected regions generally have bounded times, it is
  OK to use the supplied simple spinlocks.  Spinlocks are likely to
  improve performance for lightly contended applications, but worsen
  performance under heavy contention.

  If USE_LOCKS is > 1, the definitions of lock routines here are
  bypassed, in which case you will need to define the type MLOCK_T,
  and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
  and TRY_LOCK.  You must also declare a
    static MLOCK_T malloc_global_mutex = { initialization values };

*/

#if !USE_LOCKS
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)            (0)
#define DESTROY_LOCK(l)            (0)
#define ACQUIRE_MALLOC_GLOBAL_LOCK()
#define RELEASE_MALLOC_GLOBAL_LOCK()

#else
#if USE_LOCKS > 1
/* -----------------------  User-defined locks ------------------------ */
/* Define your own lock implementation here */
/* #define INITIAL_LOCK(lk)  ... */
/* #define DESTROY_LOCK(lk)  ... */
/* #define ACQUIRE_LOCK(lk)  ... */
/* #define RELEASE_LOCK(lk)  ... */
/* #define TRY_LOCK(lk) ... */
/* static MLOCK_T malloc_global_mutex = ... */

#elif USE_SPIN_LOCKS

/* First, define CAS_LOCK and CLEAR_LOCK on ints */
/* Note CAS_LOCK defined to return 0 on success */

#if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
#define CAS_LOCK(sl)     __sync_lock_test_and_set(sl, 1)
#define CLEAR_LOCK(sl)   __sync_lock_release(sl)

#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
/* Custom spin locks for older gcc on x86 */
static FORCEINLINE int x86_cas_lock(int *sl) {
  int ret;
  int val = 1;
  int cmp = 0;
  __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
                         : "=a" (ret)
                         : "r" (val), "m" (*(sl)), "0"(cmp)
                         : "memory", "cc");
  return ret;
}

static FORCEINLINE void x86_clear_lock(int* sl) {
  assert(*sl != 0);
  int prev = 0;
  int ret;
  __asm__ __volatile__ ("lock; xchgl %0, %1"
                        : "=r" (ret)
                        : "m" (*(sl)), "0"(prev)
                        : "memory");
}

#define CAS_LOCK(sl)     x86_cas_lock(sl)
#define CLEAR_LOCK(sl)   x86_clear_lock(sl)

#else /* Win32 MSC */
#define CAS_LOCK(sl)     interlockedexchange(sl, (LONG)1)
#define CLEAR_LOCK(sl)   interlockedexchange (sl, (LONG)0)

#endif /* ... gcc spin locks ... */

/* How to yield for a spin lock */
#define SPINS_PER_YIELD       63
#if defined(_MSC_VER)
#define SLEEP_EX_DURATION     50 /* delay for yield/sleep */
#define SPIN_LOCK_YIELD  SleepEx(SLEEP_EX_DURATION, FALSE)
#elif defined (__SVR4) && defined (__sun) /* solaris */
#define SPIN_LOCK_YIELD   thr_yield();
#elif !defined(LACKS_SCHED_H)
#define SPIN_LOCK_YIELD   sched_yield();
#else
#define SPIN_LOCK_YIELD
#endif /* ... yield ... */

#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
/* Plain spin locks use single word (embedded in malloc_states) */
static int spin_acquire_lock(int *sl) {
  int spins = 0;
  while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
    if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
  return 0;
}

#define MLOCK_T               int
#define TRY_LOCK(sl)          !CAS_LOCK(sl)
#define RELEASE_LOCK(sl)      CLEAR_LOCK(sl)
#define ACQUIRE_LOCK(sl)      (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
#define INITIAL_LOCK(sl)      (*sl = 0)
#define DESTROY_LOCK(sl)      (0)
static MLOCK_T malloc_global_mutex = 0;

#else /* USE_RECURSIVE_LOCKS */
/* types for lock owners */
#ifdef WIN32
#define THREAD_ID_T           DWORD
#define CURRENT_THREAD        GetCurrentThreadId()
#define EQ_OWNER(X,Y)         ((X) == (Y))
#else
/*
  Note: the following assume that pthread_t is a type that can be
  initialized to (casted) zero.  If this is not the case, you will need to
  somehow redefine these or not use spin locks.
*/
#define THREAD_ID_T           pthread_t
#define CURRENT_THREAD        pthread_self()
#define EQ_OWNER(X,Y)         pthread_equal(X, Y)
#endif

struct malloc_recursive_lock {
  int sl;
  unsigned int c;
  THREAD_ID_T threadid;
};

#define MLOCK_T  struct malloc_recursive_lock
static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};

static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
  assert(lk->sl != 0);
  if (--lk->c == 0) {
    CLEAR_LOCK(&lk->sl);
  }
}

static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
  THREAD_ID_T mythreadid = CURRENT_THREAD;
  int spins = 0;
  for (;;) {
    if (*((volatile int *)(&lk->sl)) == 0) {
      if (!CAS_LOCK(&lk->sl)) {
        lk->threadid = mythreadid;
        lk->c = 1;
        return 0;
      }
    }
    else if (EQ_OWNER(lk->threadid, mythreadid)) {
      ++lk->c;
      return 0;
    }
    if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
}

static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
  THREAD_ID_T mythreadid = CURRENT_THREAD;
  if (*((volatile int *)(&lk->sl)) == 0) {
    if (!CAS_LOCK(&lk->sl)) {
      lk->threadid = mythreadid;
      lk->c = 1;
      return 1;
    }
  }
  else if (EQ_OWNER(lk->threadid, mythreadid)) {
    ++lk->c;
    return 1;
  }
  return 0;
}

#define RELEASE_LOCK(lk)      recursive_release_lock(lk)
#define TRY_LOCK(lk)          recursive_try_lock(lk)
#define ACQUIRE_LOCK(lk)      recursive_acquire_lock(lk)
#define INITIAL_LOCK(lk)      ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
#define DESTROY_LOCK(lk)      (0)
#endif /* USE_RECURSIVE_LOCKS */

#elif defined(WIN32) /* Win32 critical sections */
#define MLOCK_T               CRITICAL_SECTION
#define ACQUIRE_LOCK(lk)      (EnterCriticalSection(lk), 0)
#define RELEASE_LOCK(lk)      LeaveCriticalSection(lk)
#define TRY_LOCK(lk)          TryEnterCriticalSection(lk)
#define INITIAL_LOCK(lk)      (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
#define DESTROY_LOCK(lk)      (DeleteCriticalSection(lk), 0)
#define NEED_GLOBAL_LOCK_INIT

static MLOCK_T malloc_global_mutex;
static volatile LONG malloc_global_mutex_status;

/* Use spin loop to initialize global lock */
static void init_malloc_global_mutex() {
  for (;;) {
    long stat = malloc_global_mutex_status;
    if (stat > 0)
      return;
    /* transition to < 0 while initializing, then to > 0 */
    if (stat == 0 &&
        interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
      InitializeCriticalSection(&malloc_global_mutex);
      interlockedexchange(&malloc_global_mutex_status, (LONG)1);
      return;
    }
    SleepEx(0, FALSE);
  }
}

#else /* pthreads-based locks */
#define MLOCK_T               pthread_mutex_t
#define ACQUIRE_LOCK(lk)      pthread_mutex_lock(lk)
#define RELEASE_LOCK(lk)      pthread_mutex_unlock(lk)
#define TRY_LOCK(lk)          (!pthread_mutex_trylock(lk))
#define INITIAL_LOCK(lk)      pthread_init_lock(lk)
#define DESTROY_LOCK(lk)      pthread_mutex_destroy(lk)

#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
/* Cope with old-style linux recursive lock initialization by adding */
/* skipped internal declaration from pthread.h */
extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
                                              int __kind));
#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
#endif /* USE_RECURSIVE_LOCKS ... */

static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;

static int pthread_init_lock (MLOCK_T *lk) {
  pthread_mutexattr_t attr;
  if (pthread_mutexattr_init(&attr)) return 1;
#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
#endif
  if (pthread_mutex_init(lk, &attr)) return 1;
  if (pthread_mutexattr_destroy(&attr)) return 1;
  return 0;
}

#endif /* ... lock types ... */

/* Common code for all lock types */
#define USE_LOCK_BIT               (2U)

#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
#define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
#endif

#ifndef RELEASE_MALLOC_GLOBAL_LOCK
#define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
#endif

#endif /* USE_LOCKS */

/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of previous chunk (if P = 0)                             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
            | Size of this chunk                                         1| +-+
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |                                                               |
            +-                                                             -+
            |                                                               |
            +-                                                             -+
            |                                                               :
            +-      size - sizeof(size_t) available payload bytes          -+
            :                                                               |
    chunk-> +-                                                             -+
            |                                                               |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
            | Size of next chunk (may or may not be in use)               | +-+
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  And if it's free, it looks like this:

    chunk-> +-                                                             -+
            | User payload (must be in use, or we would have merged!)       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
            | Size of this chunk                                         0| +-+
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Next pointer                                                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Prev pointer                                                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |                                                               :
            +-          size - sizeof(struct chunk) unused bytes           -+
            :                                                               |
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            | Size of this chunk                                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
            | Size of next chunk (must be in use, or we would have merged)| +-+
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |                                                               :
            +-                        User payload                         -+
            :                                                               |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                          |0|
                                                                          +-+

  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory.  If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse (unless the chunk is mmapped).  This redundancy enables usage
  checks within free and realloc, and reduces indirection when freeing
  and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena.  This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk.  This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory).  It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it.  However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap have both cinuse and pinuse bits
        cleared in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment.  Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
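/*
  Illustrative note (added commentary, not part of the original source):
  chunk2mem skips the prev_foot and head fields, so on a typical 64-bit
  build (TWO_SIZE_T_SIZES == 16) the user pointer for a chunk at p is
  (char*)p + 16, and mem2chunk(chunk2mem(p)) == p for any chunk pointer.
*/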
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
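/*
  Worked example (added commentary, not part of the original source;
  assumes a 64-bit build with FOOTERS == 0: SIZE_T_SIZE == 8,
  CHUNK_OVERHEAD == 8, MALLOC_ALIGNMENT == 16, MIN_CHUNK_SIZE == 32,
  MIN_REQUEST == 23):

    pad_request(24)  == (24 + 8 + 15) & ~15 == 32
    request2size(1)  == MIN_CHUNK_SIZE == 32    (1 < MIN_REQUEST)
    request2size(25) == (25 + 8 + 15) & ~15 == 48
*/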


/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when previous
  adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
  use, unless mmapped, in which case both bits are cleared.

  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define FLAG4_BIT           (SIZE_T_FOUR)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define flag4inuse(p)       ((p)->head & FLAG4_BIT)
#define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
#define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)

#define chunksize(p)        ((p)->head & ~(FLAG_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define set_flag4(p)        ((p)->head |= FLAG4_BIT)
#define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
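/*
  Illustrative note (added commentary, not part of the original source):
  chunks are traversed by size arithmetic, not stored links.  If
  chunksize(p) == 48, next_chunk(p) is the chunk at (char*)p + 48; and
  when pinuse(p) is clear, p->prev_foot holds the previous chunk's size,
  so prev_chunk(p) steps back exactly that many bytes.
*/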
2281
2282
/* extract next chunk's pinuse bit */
2283
#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2284
2285
/* Get/set size at footer */
2286
#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2287
#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2288
2289
/* Set size, pinuse bit, and foot */
2290
#define set_size_and_pinuse_of_free_chunk(p, s)\
2291
((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2292
2293
/* Set size, pinuse bit, foot, and clear next pinuse */
2294
#define set_free_with_pinuse(p, s, n)\
2295
(clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2296
2297
/* Get the internal overhead associated with chunk p */
2298
#define overhead_for(p)\
2299
(is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2300
2301
/* Return true if malloced space is not necessarily cleared */
2302
#if MMAP_CLEARS
2303
#define calloc_must_clear(p) (!is_mmapped(p))
2304
#else /* MMAP_CLEARS */
2305
#define calloc_must_clear(p) (1)
2306
#endif /* MMAP_CLEARS */
2307

/* ---------------------- Overlaid data structures ----------------------- */

/*
When chunks are not in use, they are treated as nodes of either
lists or trees.

"Small" chunks are stored in circular doubly-linked lists, and look
like this:

chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size of previous chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`head:' | Size of chunk, in bytes |P|
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Forward pointer to next chunk in list |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Back pointer to previous chunk in list |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Unused space (may be 0 bytes long) .
. .
. |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`foot:' | Size of chunk, in bytes |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Larger chunks are kept in a form of bitwise digital trees (aka
tries) keyed on chunksizes. Because malloc_tree_chunks are only for
free chunks greater than 256 bytes, their size doesn't impose any
constraints on user chunk sizes. Each node looks like:

chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size of previous chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`head:' | Size of chunk, in bytes |P|
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Forward pointer to next chunk of same size |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Back pointer to previous chunk of same size |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pointer to left child (child[0]) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pointer to right child (child[1]) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pointer to parent |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| bin index of this chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Unused space .
. |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`foot:' | Size of chunk, in bytes |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Each tree holding treenodes is a tree of unique chunk sizes. Chunks
of the same size are arranged in a circularly-linked list, with only
the oldest chunk (the next to be used, in our FIFO ordering)
actually in the tree. (Tree members are distinguished by a non-null
parent pointer.) If a chunk with the same size as an existing node
is inserted, it is linked off the existing node using pointers that
work in the same way as fd/bk pointers of small chunks.

Each tree contains a power of 2 sized range of chunk sizes (the
smallest is 0x100 <= x < 0x180), which is divided in half at each
tree level, with the chunks in the smaller half of the range (0x100
<= x < 0x140 for the top node) in the left subtree and the larger
half (0x140 <= x < 0x180) in the right subtree. This is, of course,
done by inspecting individual bits.

Using these rules, each node's left subtree contains all smaller
sizes than its right subtree. However, the node at the root of each
subtree has no particular ordering relationship to either. (The
dividing line between the subtree sizes is based on trie relation.)
If we remove the last chunk of a given size from the interior of the
tree, we need to replace it with a leaf node. The tree ordering
rules permit a node to be replaced by any leaf below it.

The smallest chunk in a tree (a common operation in a best-fit
allocator) can be found by walking a path to the leftmost leaf in
the tree. Unlike a usual binary tree, where we follow left child
pointers until we reach a null, here we follow the right child
pointer any time the left one is null, until we reach a leaf with
both child pointers null. The smallest chunk in the tree will be
somewhere along that path.

The worst case number of steps to add, find, or remove a node is
bounded by the number of bits differentiating chunks within
bins. Under current bin calculations, this ranges from 6 up to 21
(for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
is of course much better.
*/

struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t prev_foot;
  size_t head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t index;
};

typedef struct malloc_tree_chunk tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])

/* ----------------------------- Segments -------------------------------- */

/*
Each malloc space may include non-contiguous segments, held in a
list headed by an embedded malloc_segment record representing the
top-most space. Segments also include flags holding properties of
the space. Large chunks that are directly allocated by mmap are not
included in this list. They are instead independently created and
destroyed without otherwise keeping track of them.

Segment management mainly comes into play for spaces allocated by
MMAP. Any call to MMAP might or might not return memory that is
adjacent to an existing segment. MORECORE normally contiguously
extends the current space, so this space is almost always adjacent,
which is simpler and faster to deal with. (This is why MORECORE is
used preferentially to MMAP when both are available -- see
sys_alloc.) When allocating using MMAP, we don't use any of the
hinting mechanisms (inconsistently) supported in various
implementations of unix mmap, or distinguish reserving from
committing memory. Instead, we just ask for space, and exploit
contiguity when we get it. It is probably possible to do
better than this on some systems, but no general scheme seems
to be significantly better.

Management entails a simpler variant of the consolidation scheme
used for chunks to reduce fragmentation -- new adjacent memory is
normally prepended or appended to an existing segment. However,
there are limitations compared to chunk consolidation that mostly
reflect the fact that segment processing is relatively infrequent
(occurring only when getting memory from system) and that we
don't expect to have huge numbers of segments:

* Segments are not indexed, so traversal requires linear scans. (It
would be possible to index these, but is not worth the extra
overhead and complexity for most programs on most platforms.)
* New segments are only appended to old ones when holding top-most
memory; if they cannot be prepended to others, they are held in
different segments.

Except for the top-most segment of an mstate, each segment record
is kept at the tail of its segment. Segments are added by pushing
segment records onto the list headed by &mstate.seg for the
containing mstate.

Segment flags control allocation/merge/deallocation policies:
* If EXTERN_BIT set, then we did not allocate this segment,
and so should not try to deallocate or merge with others.
(This currently holds only for the initial segment passed
into create_mspace_with_base.)
* If USE_MMAP_BIT set, the segment may be merged with
other surrounding mmapped segments and trimmed/de-allocated
using munmap.
* If neither bit is set, then the segment was obtained using
MORECORE so can be merged with surrounding MORECORE'd segments
and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char* base; /* base address */
  size_t size; /* allocated size */
  struct malloc_segment* next; /* ptr to next segment */
  flag_t sflags; /* mmap and extern flag */
};

#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
A malloc_state holds all of the bookkeeping for a space.
The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize. The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system. The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits. It is normally the chunk split off most
    recently to service another small request. Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks. These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart. To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself). This avoids special-casing for headers. But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
    larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap"). Each bin sets its bit when non-empty, and
    clears the bit when empty. Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected. The bit maps
    conservatively use 32 bits per map word, even on 64-bit systems.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as mparams.magic.

  Max allowed footprint
    The maximum allowed bytes to allocate from system (zero means no limit)

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Trim support
    Fields holding the amount of unused topmost memory that should trigger
    trimming, and a counter to force periodic scanning to release unused
    non-topmost segments.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.

  Extension support
    A void* pointer and a size_t field that can be used to help implement
    extensions to this malloc.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS (32U)
#define NTREEBINS (32U)
#define SMALLBIN_SHIFT (3U)
#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT (8U)
#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t smallmap;
  binmap_t treemap;
  size_t dvsize;
  size_t topsize;
  char* least_addr;
  mchunkptr dv;
  mchunkptr top;
  size_t trim_check;
  size_t release_checks;
  size_t magic;
  mchunkptr smallbins[(NSMALLBINS+1)*2];
  tbinptr treebins[NTREEBINS];
  size_t footprint;
  size_t max_footprint;
  size_t footprint_limit; /* zero means no limit */
  flag_t mflags;
#if USE_LOCKS
  MLOCK_T mutex; /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment seg;
  void* extp; /* Unused but available for extensions */
  size_t exts;
};

typedef struct malloc_state* mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
malloc_params holds global properties, including those that can be
dynamically set using mallopt. There is a single instance, mparams,
initialized in init_mparams. Note that the non-zeroness of "magic"
also serves as an initialization flag.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* Ensure mparams initialized */
#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())

#if !ONLY_MSPACES

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm (&_gm_)
#define is_global(M) ((M) == &_gm_)

#endif /* !ONLY_MSPACES */

#define is_initialized(M) ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
#if USE_LOCKS
#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
#else
#define disable_lock(M)
#endif

#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
#if HAVE_MMAP
#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
#else
#define disable_mmap(M)
#endif

#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
  ((M)->mflags = (L)?\
   ((M)->mflags | USE_LOCK_BIT) :\
   ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity - SIZE_T_ONE))\
   & ~(mparams.granularity - SIZE_T_ONE))


/* For mmap, use granularity alignment on windows, else page-align */
#ifdef WIN32
#define mmap_align(S) granularity_align(S)
#else
#define mmap_align(S) page_align(S)
#endif

/* For sys_alloc, enough padding to ensure can malloc request on success */
#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)

#define is_page_aligned(S)\
  (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
  (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)

/* True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s) ((s) > (M)->trim_check)
#else /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s) (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
TOP_FOOT_SIZE is padding at the end of a segment, including space
that may be needed to place segment records and fenceposts when new
noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* ------------------------------- Hooks -------------------------------- */

/*
PREACTION should be defined to return 0 on success, and nonzero on
failure. If you are not using locking, you can redefine these to do
anything you like.
*/

#if USE_LOCKS
#define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif /* POSTACTION */

#endif /* USE_LOCKS */

/*
CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
USAGE_ERROR_ACTION is triggered on detected bad frees and
reallocs. The argument p is an address that might have triggered the
fault. It is ignored by the two predefined actions, but might be
useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */


/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P) do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P) do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
#define check_malloc_state(M) do_check_malloc_state(M)

static void do_check_any_chunk(mstate m, mchunkptr p);
static void do_check_top_chunk(mstate m, mchunkptr p);
static void do_check_mmapped_chunk(mstate m, mchunkptr p);
static void do_check_inuse_chunk(mstate m, mchunkptr p);
static void do_check_free_chunk(mstate m, mchunkptr p);
static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void do_check_tree(mstate m, tchunkptr t);
static void do_check_treebin(mstate m, bindex_t i);
static void do_check_smallbin(mstate m, bindex_t i);
static void do_check_malloc_state(mstate m);
static int bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i) (&((M)->treebins[i]))

/* assign tree index for size S to variable I. Use x86 asm if possible */
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define compute_tree_index(S, I)\
{\
  unsigned int X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
    I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#elif defined (__INTEL_COMPILER)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K = _bit_scan_reverse (X); \
    I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    _BitScanReverse((DWORD *) &K, (DWORD) X);\
    I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */

/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
  (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
  ((i == NTREEBINS-1)? 0 : \
   ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
  ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))


/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i) ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))

#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))

/* isolate the least set bit of a bitmap */
#define least_bit(x) ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x) ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))

/* index corresponding to given bit. Use x86 asm if possible */

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = __builtin_ctz(X); \
  I = (bindex_t)J;\
}

#elif defined (__INTEL_COMPILER)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  J = _bit_scan_forward (X); \
  I = (bindex_t)J;\
}

#elif defined(_MSC_VER) && _MSC_VER>=1300
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  _BitScanForward((DWORD *) &J, X);\
  I = (bindex_t)J;\
}

#elif USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K; Y >>= K;\
  N += K = Y >> (8-3) & 8; Y >>= K;\
  N += K = Y >> (4-2) & 4; Y >>= K;\
  N += K = Y >> (2-1) & 2; Y >>= K;\
  N += K = Y >> (1-0) & 1; Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* GNUC */


/* ----------------------- Runtime Check Support ------------------------- */

/*
For security, the main invariant is that malloc/free/etc never
writes to a static address other than malloc_state, unless static
malloc_state itself has been corrupted, which cannot occur via
malloc (because of these checks). In essence this means that we
believe all pointers, sizes, maps etc held in malloc_state, but
check all of those linked or offset from other embedded data
structures. These checks are interspersed with main code in a way
that tends to minimize their run-time cost.

When FOOTERS is defined, in addition to range checking, we also
verify footer fields of inuse chunks, which can be used to guarantee
that the mstate controlling malloc/free is intact. This is a
streamlined version of the approach described by William Robertson
et al in "Run-time Detection of Heap-based Overflows" LISA'03
http://www.usenix.org/events/lisa03/tech/robertson.html The footer
of an inuse chunk holds the xor of its mstate and a random seed,
which is checked upon calls to free() and realloc(). This is
(probabilistically) unguessable from outside the program, but can be
computed by any code successfully malloc'ing any chunk, so does not
itself provide protection against code that has already broken
security through some other means. Unlike Robertson et al, we
always dynamically check addresses of all offset chunks (previous,
next, etc). This turns out to be cheaper than relying on hashes.
*/

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n) ((char*)(p) < (char*)(n))
/* Check if p has inuse status */
#define ok_inuse(p) is_inuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p) pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n) (1)
#define ok_inuse(p) (1)
#define ok_pinuse(p) (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M) ((M)->magic == mparams.magic)
#else /* (FOOTERS && !INSECURE) */
#define ok_magic(M) (1)
#endif /* (FOOTERS && !INSECURE) */

/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e) __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e) (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e) (1)
#endif /* !INSECURE */

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Macros for setting head/foot of non-mmapped chunks */

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */

/* ---------------------------- setting mparams -------------------------- */

#if LOCK_AT_FORK
static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
#endif /* LOCK_AT_FORK */

/* Initialize mparams */
static int init_mparams(void) {
#ifdef NEED_GLOBAL_LOCK_INIT
  if (malloc_global_mutex_status <= 0)
    init_malloc_global_mutex();
#endif

  ACQUIRE_MALLOC_GLOBAL_LOCK();
  if (mparams.magic == 0) {
    size_t magic;
    size_t psize;
    size_t gsize;

#ifndef WIN32
    psize = malloc_getpagesize;
    gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      psize = system_info.dwPageSize;
      gsize = ((DEFAULT_GRANULARITY != 0)?
               DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
        ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
        ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
        ((psize            & (psize-SIZE_T_ONE))            != 0))
      ABORT;
    mparams.granularity = gsize;
    mparams.page_size = psize;
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if !ONLY_MSPACES
    /* Set up lock for main malloc area */
    gm->mflags = mparams.default_mflags;
    (void)INITIAL_LOCK(&gm->mutex);
#endif
#if LOCK_AT_FORK
    pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
#endif

    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        magic = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
#ifdef WIN32
      magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
#elif defined(LACKS_TIME_H)
      magic = (size_t)&magic ^ (size_t)0x55555555U;
#else
      magic = (size_t)(time(0) ^ (size_t)0x55555555U);
#endif
      magic |= (size_t)8U;    /* ensure nonzero */
      magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
      /* Until memory modes commonly available, use volatile-write */
      (*(volatile size_t *)(&(mparams.magic))) = magic;
    }
  }

  RELEASE_MALLOC_GLOBAL_LOCK();
  return 1;
}

/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val;
  ensure_initialization();
  val = (value == -1)? MAX_SIZE_T : (size_t)value;
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
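
/*
  Editor's note (illustrative sketch, not part of the original source):
  change_mparam backs the public mallopt entry point, so tuning calls
  look roughly like the following, assuming the dl-prefixed API:

    dlmallopt(M_TRIM_THRESHOLD, 128*1024);  // bytes of top kept before trimming
    dlmallopt(M_GRANULARITY,    64*1024);   // must be a power of 2 >= page size
    dlmallopt(M_MMAP_THRESHOLD, 256*1024);  // requests >= this may be mmapped

  Each call returns 1 on success and 0 for a rejected value (for example,
  a granularity that is not a power of two at least the page size, per the
  M_GRANULARITY check above).
*/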

#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!pinuse(chunk_plus_offset(p, sz)));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(is_inuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = chunksize(p);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!is_inuse(p));
  assert(!next_pinuse(p));
  assert (!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert (next == m->top || is_inuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~INUSE_BITS;
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}

/* Check a tree and its subtrees. */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!is_inuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert (u->parent->child[0] == u ||
              u->parent->child[1] == u ||
              *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/* Check all the chunks in a treebin. */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/* Check all the chunks in a smallbin. */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}

/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (is_inuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}


/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    /*assert(m->topsize == chunksize(m->top)); redundant */
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */

/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  ensure_initialization();
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!is_inuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */

#if !NO_MALLOC_STATS
static void internal_malloc_stats(mstate m) {
  ensure_initialization();
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!is_inuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }
    POSTACTION(m); /* drop lock */
    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
  }
}
#endif /* NO_MALLOC_STATS */

/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros.  Even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin */
#define insert_small_chunk(M, P, S) {\
  bindex_t I = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}

/* Unlink a chunk from a smallbin */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
    if (B == F) {\
      clear_smallmap(M, I);\
    }\
    else if (RTCHECK(B == smallbin_at(M,I) ||\
                     (ok_address(M, B) && B->fd == P))) {\
      F->bk = B;\
      B->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F) {\
    clear_smallmap(M, I);\
  }\
  else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  assert(is_small(DVS));\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}

/* ------------------------- Operations on trees ------------------------- */

/* Insert chunk into tree */
#define insert_large_chunk(M, X, S) {\
  tbinptr* H;\
  bindex_t I;\
  compute_tree_index(S, I);\
  H = treebin_at(M, I);\
  X->index = I;\
  X->child[0] = X->child[1] = 0;\
  if (!treemap_is_marked(M, I)) {\
    mark_treemap(M, I);\
    *H = X;\
    X->parent = (tchunkptr)H;\
    X->fd = X->bk = X;\
  }\
  else {\
    tchunkptr T = *H;\
    size_t K = S << leftshift_for_tree_index(I);\
    for (;;) {\
      if (chunksize(T) != S) {\
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
        K <<= 1;\
        if (*C != 0)\
          T = *C;\
        else if (RTCHECK(ok_address(M, C))) {\
          *C = X;\
          X->parent = T;\
          X->fd = X->bk = X;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
      else {\
        tchunkptr F = T->fd;\
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
          T->fd = F->bk = X;\
          X->fd = F;\
          X->bk = T;\
          X->parent = 0;\
          break;\
        }\
        else {\
          CORRUPTION_ERROR_ACTION(M);\
          break;\
        }\
      }\
    }\
  }\
}

/*
  Unlink steps:

  1. If x is a chained node, unlink it from its same-sized fd/bk links
     and choose its bk node as its replacement.
  2. If x was the last node of its size, but not a leaf node, it must
     be replaced with a leaf node (not merely one with an open left or
     right), to make sure that lefts and rights of descendents
     correspond properly to bit masks.  We use the rightmost descendent
     of x.  We could use any other leaf, but this is easy to locate and
     tends to counteract removal of leftmosts elsewhere, and so keeps
     paths shorter than minimally guaranteed.  This doesn't loop much
     because on average a node in a tree is near the bottom.
  3. If x is the base of a chain (i.e., has parent links) relink
     x's parent and children to x's replacement (or null if none).
*/
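
/*
  Editor's sketch (illustrative only): for a treebin chain of three
  same-sized chunks A <-> B <-> C linked through fd/bk, where only A
  carries the parent/child links, unlinking B needs step 1 alone:

    B->fd->bk = B->bk;   B->bk->fd = B->fd;

  Unlinking A instead promotes A->bk as the new chain base, and if A
  was the last node of its size, steps 2-3 hunt A's rightmost
  descendent to splice into A's tree position, as unlink_large_chunk
  implements.
*/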

#define unlink_large_chunk(M, X) {\
  tchunkptr XP = X->parent;\
  tchunkptr R;\
  if (X->bk != X) {\
    tchunkptr F = X->fd;\
    R = X->bk;\
    if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
      F->bk = R;\
      R->fd = F;\
    }\
    else {\
      CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
  else {\
    tchunkptr* RP;\
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
        ((R = *(RP = &(X->child[0]))) != 0)) {\
      tchunkptr* CP;\
      while ((*(CP = &(R->child[1])) != 0) ||\
             (*(CP = &(R->child[0])) != 0)) {\
        R = *(RP = CP);\
      }\
      if (RTCHECK(ok_address(M, RP)))\
        *RP = 0;\
      else {\
        CORRUPTION_ERROR_ACTION(M);\
      }\
    }\
  }\
  if (XP != 0) {\
    tbinptr* H = treebin_at(M, X->index);\
    if (X == *H) {\
      if ((*H = R) == 0) \
        clear_treemap(M, X->index);\
    }\
    else if (RTCHECK(ok_address(M, XP))) {\
      if (XP->child[0] == X) \
        XP->child[0] = R;\
      else \
        XP->child[1] = R;\
    }\
    else\
      CORRUPTION_ERROR_ACTION(M);\
    if (R != 0) {\
      if (RTCHECK(ok_address(M, R))) {\
        tchunkptr C0, C1;\
        R->parent = XP;\
        if ((C0 = X->child[0]) != 0) {\
          if (RTCHECK(ok_address(M, C0))) {\
            R->child[0] = C0;\
            C0->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
        if ((C1 = X->child[1]) != 0) {\
          if (RTCHECK(ok_address(M, C1))) {\
            R->child[1] = C1;\
            C1->parent = R;\
          }\
          else\
            CORRUPTION_ERROR_ACTION(M);\
        }\
      }\
      else\
        CORRUPTION_ERROR_ACTION(M);\
    }\
  }\
}

/* Relays to large vs small bin operations */

#define insert_chunk(M, P, S)\
  if (is_small(S)) insert_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }

#define unlink_chunk(M, P, S)\
  if (is_small(S)) unlink_small_chunk(M, P, S)\
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }


/* Relays to internal calls to malloc/free from realloc, memalign etc */

#if ONLY_MSPACES
#define internal_malloc(m, b) mspace_malloc(m, b)
#define internal_free(m, mem) mspace_free(m,mem);
#else /* ONLY_MSPACES */
#if MSPACES
#define internal_malloc(m, b)\
  ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
#define internal_free(m, mem)\
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
#else /* MSPACES */
#define internal_malloc(m, b) dlmalloc(b)
#define internal_free(m, mem) dlfree(mem)
#endif /* MSPACES */
#endif /* ONLY_MSPACES */

/* ----------------------- Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).
*/
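
/*
  Editor's sketch (illustrative, derived from the code below): a
  directly mmapped region is laid out as

    mmap base ... offset bytes ... [chunk p]
      p->prev_foot                     = offset (distance back to base)
      p->head                          = psize  (usable chunk size)
      chunk_plus_offset(p, psize)->head              = FENCEPOST_HEAD
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head  = 0

  so on free the original mapping is recovered as (char*)p - p->prev_foot
  with total length psize + offset + MMAP_FOOT_PAD, which mmap_alloc and
  mmap_resize maintain.
*/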

/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb) {
  size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + mmsize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }
  if (mmsize > nb) {     /* Check for wrap around 0 */
    char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
    if (mm != CMFAIL) {
      size_t offset = align_offset(chunk2mem(mm));
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
      mchunkptr p = (mchunkptr)(mm + offset);
      p->prev_foot = offset;
      p->head = psize;
      mark_inuse_foot(m, p, psize);
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;

      if (m->least_addr == 0 || mm < m->least_addr)
        m->least_addr = mm;
      if ((m->footprint += mmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      assert(is_aligned(chunk2mem(p)));
      check_mmapped_chunk(m, p);
      return chunk2mem(p);
    }
  }
  return 0;
}

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
  size_t oldsize = chunksize(oldp);
  (void)flags; /* placate people compiling -Wunused */
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
    return 0;
  /* Keep old chunk if big enough but not too big */
  if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
    return oldp;
  else {
    size_t offset = oldp->prev_foot;
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
    size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, flags);
    if (cp != CMFAIL) {
      mchunkptr newp = (mchunkptr)(cp + offset);
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
      newp->head = psize;
      mark_inuse_foot(m, newp, psize);
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;

      if (cp < m->least_addr)
        m->least_addr = cp;
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
        m->max_footprint = m->footprint;
      check_mmapped_chunk(m, newp);
      return newp;
    }
  }
  return 0;
}


/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  size_t offset = align_offset(chunk2mem(p));
  p = (mchunkptr)((char*)p + offset);
  psize -= offset;

  m->top = p;
  m->topsize = psize;
  p->head = psize | PINUSE_BIT;
  /* set size of fake trailing chunk holding overhead space only once */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}

#if PROCEED_ON_ERROR

/* default corruption action */
static void reset_on_error(mstate m) {
  int i;
  ++malloc_corruption_error_count;
  /* Reinitialize fields to forget about all memory */
  m->smallmap = m->treemap = 0;
  m->dvsize = m->topsize = 0;
  m->seg.base = 0;
  m->seg.size = 0;
  m->seg.next = 0;
  m->top = m->dv = 0;
  for (i = 0; i < NTREEBINS; ++i)
    *treebin_at(m, i) = 0;
  init_bins(m);
}
#endif /* PROCEED_ON_ERROR */

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!is_inuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}

/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;
  size_t asize; /* allocation size */

  ensure_initialization();

  /* Directly map large chunks, but only if already initialized */
  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  asize = granularity_align(nb + SYS_ALLOC_PADDING);
  if (asize <= nb)
    return 0; /* wraparound */
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + asize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)

    In all cases, we need to request enough bytes from system to ensure
    we can malloc nb bytes upon success, so pad with enough space for
    top_foot, plus alignment-pad to make sure we don't lose bytes if
    not on boundary, and round this up to a granularity unit.
  */

  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    size_t ssize = asize; /* sbrk call size */
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    ACQUIRE_MALLOC_GLOBAL_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        size_t fp;
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          ssize += (page_align((size_t)base) - (size_t)base);
        fp = m->footprint + ssize; /* recheck limits */
        if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
            (m->footprint_limit == 0 ||
             (fp > m->footprint && fp <= m->footprint_limit)) &&
            (br = (char*)(CALL_MORECORE(ssize))) == base) {
          tbase = base;
          tsize = ssize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
      /* Use mem here only if it did continuously extend old space */
      if (ssize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
        tbase = br;
        tsize = ssize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {     /* Try to use/extend the space we did get */
        if (ssize < HALF_MAX_SIZE_T &&
            ssize < nb + SYS_ALLOC_PADDING) {
          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              ssize += esize;
            else {            /* Can't use; try to release */
              (void) CALL_MORECORE(-ssize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {     /* Use the space we did get */
        tbase = br;
        tsize = ssize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MALLOC_GLOBAL_LOCK();
  }

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    char* mp = (char*)(CALL_MMAP(asize));
    if (mp != CMFAIL) {
      tbase = mp;
      tsize = asize;
      mmap_flag = USE_MMAP_BIT;
    }
  }

  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MALLOC_GLOBAL_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MALLOC_GLOBAL_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      if (m->least_addr == 0 || tbase < m->least_addr)
        m->least_addr = tbase;
      m->seg.base = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      m->release_checks = MAX_RELEASE_CHECK_RATE;
      init_bins(m);
#if !ONLY_MSPACES
      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else
#endif
      {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      /* Only consider most recent segment if traversal suppressed */
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
4218
sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4219
if (sp != 0 &&
4220
!is_extern_segment(sp) &&
4221
(sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4222
char* oldbase = sp->base;
4223
sp->base = tbase;
4224
sp->size += tsize;
4225
return prepend_alloc(m, tbase, oldbase, nb);
4226
}
4227
else
4228
add_segment(m, tbase, tsize, mmap_flag);
4229
}
4230
}
4231
4232
if (nb < m->topsize) { /* Allocate from new or extended top space */
4233
size_t rsize = m->topsize -= nb;
4234
mchunkptr p = m->top;
4235
mchunkptr r = m->top = chunk_plus_offset(p, nb);
4236
r->head = rsize | PINUSE_BIT;
4237
set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4238
check_top_chunk(m, m->top);
4239
check_malloced_chunk(m, chunk2mem(p), nb);
4240
return chunk2mem(p);
4241
}
4242
}
4243
4244
MALLOC_FAILURE_ACTION;
4245
return 0;
4246
}

/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  int nsegs = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    ++nsegs;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
      break;
    pred = sp;
    sp = next;
  }
  /* Reset check counter */
  m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
                       (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
  return released;
}

static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  ensure_initialization();
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            (void)newsize; /* placate people compiling -Wunused-variable */
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MALLOC_GLOBAL_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MALLOC_GLOBAL_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0 && m->topsize > m->trim_check)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}

/* Consolidate and bin a chunk. Differs from exported versions
   of free mainly in that the chunk need not be marked as inuse.
*/
static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
  mchunkptr next = chunk_plus_offset(p, psize);
  if (!pinuse(p)) {
    mchunkptr prev;
    size_t prevsize = p->prev_foot;
    if (is_mmapped(p)) {
      psize += prevsize + MMAP_FOOT_PAD;
      if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
        m->footprint -= psize;
      return;
    }
    prev = chunk_minus_offset(p, prevsize);
    psize += prevsize;
    p = prev;
    if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
      if (p != m->dv) {
        unlink_chunk(m, p, prevsize);
      }
      else if ((next->head & INUSE_BITS) == INUSE_BITS) {
        m->dvsize = psize;
        set_free_with_pinuse(p, psize, next);
        return;
      }
    }
    else {
      CORRUPTION_ERROR_ACTION(m);
      return;
    }
  }
  if (RTCHECK(ok_address(m, next))) {
    if (!cinuse(next)) { /* consolidate forward */
      if (next == m->top) {
        size_t tsize = m->topsize += psize;
        m->top = p;
        p->head = tsize | PINUSE_BIT;
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        return;
      }
      else if (next == m->dv) {
        size_t dsize = m->dvsize += psize;
        m->dv = p;
        set_size_and_pinuse_of_free_chunk(p, dsize);
        return;
      }
      else {
        size_t nsize = chunksize(next);
        psize += nsize;
        unlink_chunk(m, next, nsize);
        set_size_and_pinuse_of_free_chunk(p, psize);
        if (p == m->dv) {
          m->dvsize = psize;
          return;
        }
      }
    }
    else {
      set_free_with_pinuse(p, psize, next);
    }
    insert_chunk(m, p, psize);
  }
  else {
    CORRUPTION_ERROR_ACTION(m);
  }
}

/* ---------------------------- malloc --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);
  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /* If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}

/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);
  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
           represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */

#if USE_LOCKS
  ensure_initialization(); /* initialize in sys_alloc if not using locks */
#endif

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}

/* ---------------------------- free --------------------------- */

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin. Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

#endif /* !ONLY_MSPACES */

/* ------------ Internal support for realloc, memalign, etc -------------- */

/* Try to realloc; only in-place unless can_move true */
static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
                                   int can_move) {
  mchunkptr newp = 0;
  size_t oldsize = chunksize(p);
  mchunkptr next = chunk_plus_offset(p, oldsize);
  if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
              ok_next(p, next) && ok_pinuse(next))) {
    if (is_mmapped(p)) {
      newp = mmap_resize(m, p, nb, can_move);
    }
    else if (oldsize >= nb) {             /* already big enough */
      size_t rsize = oldsize - nb;
      if (rsize >= MIN_CHUNK_SIZE) {      /* split off remainder */
        mchunkptr r = chunk_plus_offset(p, nb);
        set_inuse(m, p, nb);
        set_inuse(m, r, rsize);
        dispose_chunk(m, r, rsize);
      }
      newp = p;
    }
    else if (next == m->top) {  /* extend into top */
      if (oldsize + m->topsize > nb) {
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(p, nb);
        set_inuse(m, p, nb);
        newtop->head = newtopsize | PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = p;
      }
    }
    else if (next == m->dv) { /* extend into dv */
      size_t dvs = m->dvsize;
      if (oldsize + dvs >= nb) {
        size_t dsize = oldsize + dvs - nb;
        if (dsize >= MIN_CHUNK_SIZE) {
          mchunkptr r = chunk_plus_offset(p, nb);
          mchunkptr n = chunk_plus_offset(r, dsize);
          set_inuse(m, p, nb);
          set_size_and_pinuse_of_free_chunk(r, dsize);
          clear_pinuse(n);
          m->dvsize = dsize;
          m->dv = r;
        }
        else { /* exhaust dv */
          size_t newsize = oldsize + dvs;
          set_inuse(m, p, newsize);
          m->dvsize = 0;
          m->dv = 0;
        }
        newp = p;
      }
    }
    else if (!cinuse(next)) { /* extend into next free chunk */
      size_t nextsize = chunksize(next);
      if (oldsize + nextsize >= nb) {
        size_t rsize = oldsize + nextsize - nb;
        unlink_chunk(m, next, nextsize);
        if (rsize < MIN_CHUNK_SIZE) {
          size_t newsize = oldsize + nextsize;
          set_inuse(m, p, newsize);
        }
        else {
          mchunkptr r = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, r, rsize);
          dispose_chunk(m, r, rsize);
        }
        newp = p;
      }
    }
  }
  else {
    USAGE_ERROR_ACTION(m, chunk2mem(p));
  }
  return newp;
}

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  void* mem = 0;
  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }
  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0) { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    mem = internal_malloc(m, req);
    if (mem != 0) {
      mchunkptr p = mem2chunk(mem);
      if (PREACTION(m))
        return 0;
      if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk. Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = newsize;
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          dispose_chunk(m, p, leadsize);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          dispose_chunk(m, remainder, remainder_size);
        }
      }

      mem = chunk2mem(p);
      assert(chunksize(p) >= nb);
      assert(((size_t)mem & (alignment - 1)) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
    }
  }
  return mem;
}

/*
  Common support for independent_X routines, handling
  all of the combinations that can result.
  The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
*/
static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  ensure_initialization();
  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk. First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) { /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}

/* Try to free all pointers in the given array.
   Note: this could be made faster, by delaying consolidation,
   at the price of disabling some user integrity checks. We
   still optimize some consolidations by combining adjacent
   chunks before freeing, which will occur often if allocated
   with ialloc or the array is sorted.
*/
static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
  size_t unfreed = 0;
  if (!PREACTION(m)) {
    void** a;
    void** fence = &(array[nelem]);
    for (a = array; a != fence; ++a) {
      void* mem = *a;
      if (mem != 0) {
        mchunkptr p = mem2chunk(mem);
        size_t psize = chunksize(p);
#if FOOTERS
        if (get_mstate_for(p) != m) {
          ++unfreed;
          continue;
        }
#endif
        check_inuse_chunk(m, p);
        *a = 0;
        if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
          void ** b = a + 1; /* try to merge with next chunk */
          mchunkptr next = next_chunk(p);
          if (b != fence && *b == chunk2mem(next)) {
            size_t newsize = chunksize(next) + psize;
            set_inuse(m, p, newsize);
            *b = chunk2mem(p);
          }
          else
            dispose_chunk(m, p, psize);
        }
        else {
          CORRUPTION_ERROR_ACTION(m);
          break;
        }
      }
    }
    if (should_trim(m, m->topsize))
      sys_trim(m, 0);
    POSTACTION(m);
  }
  return unfreed;
}

/* Traversal */
#if MALLOC_INSPECT_ALL
static void internal_inspect_all(mstate m,
                                 void(*handler)(void *start,
                                                void *end,
                                                size_t used_bytes,
                                                void* callback_arg),
                                 void* arg) {
  if (is_initialized(m)) {
    mchunkptr top = m->top;
    msegmentptr s;
    for (s = &m->seg; s != 0; s = s->next) {
      mchunkptr q = align_as_chunk(s->base);
      while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
        mchunkptr next = next_chunk(q);
        size_t sz = chunksize(q);
        size_t used;
        void* start;
        if (is_inuse(q)) {
          used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
          start = chunk2mem(q);
        }
        else {
          used = 0;
          if (is_small(sz)) { /* offset by possible bookkeeping */
            start = (void*)((char*)q + sizeof(struct malloc_chunk));
          }
          else {
            start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
          }
        }
        if (start < (void*)next) /* skip if all space is bookkeeping */
          handler(start, next, used, arg);
        if (q == top)
          break;
        q = next;
      }
    }
  }
}
#endif /* MALLOC_INSPECT_ALL */
5183
5184
/* ------------------ Exported realloc, memalign, etc -------------------- */

#if !ONLY_MSPACES

void* dlrealloc(void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = dlmalloc(bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    dlfree(oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = internal_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          internal_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}

void* dlrealloc_in_place(void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = gm;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}

void* dlmemalign(size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT) {
    return dlmalloc(bytes);
  }
  return internal_memalign(gm, alignment, bytes);
}

int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
  void* mem = 0;
  if (alignment == MALLOC_ALIGNMENT)
    mem = dlmalloc(bytes);
  else {
    size_t d = alignment / sizeof(void*);
    size_t r = alignment % sizeof(void*);
    if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
      return EINVAL;
    else if (bytes <= MAX_REQUEST - alignment) {
      if (alignment < MIN_CHUNK_SIZE)
        alignment = MIN_CHUNK_SIZE;
      mem = internal_memalign(gm, alignment, bytes);
    }
  }
  if (mem == 0)
    return ENOMEM;
  else {
    *pp = mem;
    return 0;
  }
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

size_t dlbulk_free(void* array[], size_t nelem) {
  return internal_bulk_free(gm, array, nelem);
}

#if MALLOC_INSPECT_ALL
void dlmalloc_inspect_all(void(*handler)(void *start,
                                         void *end,
                                         size_t used_bytes,
                                         void* callback_arg),
                          void* arg) {
  ensure_initialization();
  if (!PREACTION(gm)) {
    internal_inspect_all(gm, handler, arg);
    POSTACTION(gm);
  }
}
#endif /* MALLOC_INSPECT_ALL */

int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

size_t dlmalloc_footprint_limit(void) {
  size_t maf = gm->footprint_limit;
  return maf == 0 ? MAX_SIZE_T : maf;
}

size_t dlmalloc_set_footprint_limit(size_t bytes) {
  size_t result;  /* invert sense of 0 */
  if (bytes == 0)
    result = granularity_align(1); /* Use minimal size */
  else if (bytes == MAX_SIZE_T)
    result = 0;                    /* disable */
  else
    result = granularity_align(bytes);
  return gm->footprint_limit = result;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

#if !NO_MALLOC_STATS
void dlmalloc_stats() {
  internal_malloc_stats(gm);
}
#endif /* NO_MALLOC_STATS */

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  (void)INITIAL_LOCK(&m->mutex);
  msp->head = (msize|INUSE_BITS);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->release_checks = MAX_RELEASE_CHECK_RATE;
  m->mflags = mparams.default_mflags;
  m->extp = 0;
  m->exts = 0;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = USE_MMAP_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}

int mspace_track_large_chunks(mspace msp, int enable) {
  int ret = 0;
  mstate ms = (mstate)msp;
  if (!PREACTION(ms)) {
    if (!use_mmap(ms)) {
      ret = 1;
    }
    if (!enable) {
      enable_mmap(ms);
    } else {
      disable_mmap(ms);
    }
    POSTACTION(ms);
  }
  return ret;
}

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      (void)base; /* placate people compiling -Wunused-variable */
      sp = sp->next;
      if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}

/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/

void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    (void)msp; /* placate people compiling -Wunused */
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = mspace_malloc(msp, bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    mspace_free(msp, oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = (mstate)msp;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = mspace_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          mspace_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}

void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = (mstate)msp;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      (void)msp; /* placate people compiling -Wunused */
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (alignment <= MALLOC_ALIGNMENT)
    return mspace_malloc(msp, bytes);
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
  return internal_bulk_free((mstate)msp, array, nelem);
}

#if MALLOC_INSPECT_ALL
void mspace_inspect_all(mspace msp,
                        void(*handler)(void *start,
                                       void *end,
                                       size_t used_bytes,
                                       void* callback_arg),
                        void* arg) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      internal_inspect_all(ms, handler, arg);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* MALLOC_INSPECT_ALL */

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

#if !NO_MALLOC_STATS
void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* NO_MALLOC_STATS */

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_footprint_limit(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    size_t maf = ms->footprint_limit;
    result = (maf == 0) ? MAX_SIZE_T : maf;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (bytes == 0)
      result = granularity_align(1); /* Use minimal size */
    else if (bytes == MAX_SIZE_T)
      result = 0;                    /* disable */
    else
      result = granularity_align(bytes);
    ms->footprint_limit = result;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

size_t mspace_usable_size(const void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */


/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
    but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
    instead return one past the end address of memory from previous
    nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
    arguments should return increasing addresses, indicating that
    space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
    addresses, it must be OK for malloc'ed chunks to span multiple
    regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
    just return MFAIL when given negative arguments.
    Negative arguments are always multiples of pagesize. MORECORE
    must not misinterpret negative args as large positive unsigned
    args. You can suppress all such calls from even occurring by defining
    MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/* -----------------------------------------------------------------------
History:
    V2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
      * fix bad comparison in dlposix_memalign
      * don't reuse adjusted asize in sys_alloc
      * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
      * reduce compiler warnings -- thanks to all who reported/suggested these

    V2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
      * Always perform unlink checks unless INSECURE
      * Add posix_memalign.
      * Improve realloc to expand in more cases; expose realloc_in_place.
        Thanks to Peter Buhr for the suggestion.
      * Add footprint_limit, inspect_all, bulk_free. Thanks
        to Barry Hayes and others for the suggestions.
      * Internal refactorings to avoid calls while holding locks
      * Use non-reentrant locks by default. Thanks to Roland McGrath
        for the suggestion.
      * Small fixes to mspace_destroy, reset_on_error.
      * Various configuration extensions/changes. Thanks
        to all who contributed these.

    V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
      * Update Creative Commons URL

    V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
      * Use zeros instead of prev foot for is_mmapped
      * Add mspace_track_large_chunks; thanks to Jean Brouwers
      * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc; thanks to Wolfram Gloger.
      * Reentrant spin locks; thanks to Earl Chew and others
      * Win32 improvements; thanks to Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <[email protected]>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <[email protected]> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger ([email protected]).
      * Use last_remainder in more cases.
      * Pack bins using idea from [email protected]
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        ([email protected]) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger ([email protected]).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu ([email protected])
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson ([email protected]) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        ([email protected]).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from [email protected]

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from [email protected]

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/
#endif