GitHub Repository: Roblox/luau
Path: blob/master/VM/src/lmem.cpp
// This file is part of the Luau programming language and is licensed under MIT License; see LICENSE.txt for details
// This code is based on Lua 5.x implementation licensed under MIT License; see lua_LICENSE.txt for details
#include "lmem.h"

#include "lstate.h"
#include "ldo.h"
#include "ldebug.h"

#include <string.h>

/*
* The Luau heap uses a size-segregated page structure, with individual pages and large allocations
* allocated using the system heap (via the frealloc callback).
*
* The frealloc callback serves as a general, if slow, allocation callback that can allocate, free or
* resize allocations:
*
*     void* frealloc(void* ud, void* ptr, size_t oldsize, size_t newsize);
*
* frealloc(ud, NULL, 0, x) creates a new block of size x
* frealloc(ud, p, x, 0) frees the block p (and must return NULL)
* frealloc(ud, NULL, 0, 0) does nothing, equivalent to free(NULL)
*
* frealloc returns NULL if it cannot create or reallocate the area
* (any reallocation to an equal or smaller size cannot fail!)
*
* On top of this, Luau implements heap storage which is split into two types of allocations:
*
* - GCO, short for "garbage collected objects"
* - other objects (for example, arrays stored inside table objects)
*
* The heap layout for these two allocation types is a bit different.
*
* All GCOs are allocated in pages; a page is a block of memory ~16K in size that starts with a page header
* (lua_Page). Each page contains 1..N blocks of the same size, where N is selected to fill the page
* completely. This amortizes the allocation cost and increases locality. Each GCO block starts with
* the GC header (GCheader) which contains the object type, mark bits and other GC metadata. If the
* GCO block is free (not used), then it must have the type set to TNIL; in this case the block can
* be part of the per-page free list, and the link for that list is stored after the header (freegcolink).
*
* Importantly, a GCO block doesn't have any back reference to the page it's allocated in, so it's
* impossible to free it in isolation - GCO blocks are freed by sweeping the pages they belong to,
* using luaM_freegco which must specify the page; this is called by the page sweeper that traverses an
* entire page's worth of objects. For this reason it's also important that freed GCO blocks keep the
* GC header intact and accessible (with type = NIL) so that the sweeper can access it.
*
* Some GCOs are too large to fit in a 16K page without excessive fragmentation (the size threshold is
* currently 512 bytes); in this case, we allocate a dedicated small page with just a single block's worth
* of storage space, but that requires allocating an extra page header. In effect, large GCOs are a little bit
* less memory efficient, but this allows us to uniformly sweep small and large GCOs using page lists.
*
* All GCO pages are linked in a large intrusive linked list (global_State::allgcopages). Additionally,
* for each block size there's a page free list that contains pages that have at least one free block
* (global_State::freegcopages). This free list is used to make sure object allocation is O(1).
*
* When LUAU_ASSERTENABLED is enabled, all non-GCO pages are also linked in a list (global_State::allpages).
* Because this list is not strictly required for runtime operations, it is only tracked for the purposes of
* debugging. While the overhead of linking those pages together is very small, the unnecessary operations
* are avoided otherwise.
*
* Compared to GCOs, regular allocations have two important differences: they can be freed in isolation,
* and they don't start with a GC header. Because of this, each allocation is prefixed with block metadata,
* which contains the pointer to the page for allocated blocks, and the pointer to the next free block
* inside the page for freed blocks.
* For regular allocations that are too large to fit in a page (using the same threshold of 512 bytes),
* we don't allocate a separate page, instead simply using frealloc to allocate a vanilla block of memory.
*
* Just like GCO pages, we store a page free list (global_State::freepages) that allows O(1) allocation;
* there is no global list for non-GCO pages since we never need to traverse them directly.
*
* In both cases, we pick the page by computing the size class from the block size, which rounds the block
* size up to reduce the chance that we'll allocate pages that have very few allocated blocks. The size
* class strategy is determined by the SizeClassConfig constructor.
*
* Note that when the last block in a page is freed, we immediately free the page with frealloc - the
* memory manager doesn't currently attempt to keep unused memory around. This can result in excessive
* allocation traffic and can be mitigated by adding a page cache in the future.
*
* For both GCO and non-GCO pages, the per-page block allocation combines bump pointer style allocation
* (lua_Page::freeNext) and a per-page free list (lua_Page::freeList). We use the bump allocator to allocate
* the contents of the page, and the free list for further reuse; this allows shorter page setup times,
* which results in less variance between allocation costs, as well as tighter sweep bounds for newly
* allocated pages.
*/

#ifndef __has_feature
#define __has_feature(x) 0
#endif

#if __has_feature(address_sanitizer) || defined(LUAU_ENABLE_ASAN)
#include <sanitizer/asan_interface.h>
#define ASAN_POISON_MEMORY_REGION(addr, size) __asan_poison_memory_region((addr), (size))
#define ASAN_UNPOISON_MEMORY_REGION(addr, size) __asan_unpoison_memory_region((addr), (size))
#else
#define ASAN_POISON_MEMORY_REGION(addr, size) (void)0
#define ASAN_UNPOISON_MEMORY_REGION(addr, size) (void)0
#endif

/*
* The sizes of most Luau objects aren't crucial for code correctness, but they are crucial for memory efficiency.
* To prevent some of them from accidentally growing and us losing memory without realizing it, we're going to lock
* down the sizes of all critical structures.
*/
#if defined(__APPLE__)
#define ABISWITCH(x64, ms32, gcc32) (sizeof(void*) == 8 ? x64 : gcc32)
#elif defined(__i386__) && defined(__MINGW32__) && !defined(__MINGW64__)
#define ABISWITCH(x64, ms32, gcc32) (ms32)
#elif defined(__i386__) && !defined(_MSC_VER)
#define ABISWITCH(x64, ms32, gcc32) (gcc32)
#else
// Android somehow uses a similar ABI to MSVC, *not* to iOS...
#define ABISWITCH(x64, ms32, gcc32) (sizeof(void*) == 8 ? x64 : ms32)
#endif

#if LUA_VECTOR_SIZE == 4
static_assert(sizeof(TValue) == ABISWITCH(24, 24, 24), "size mismatch for value");
static_assert(sizeof(LuaNode) == ABISWITCH(48, 48, 48), "size mismatch for table entry");
#else
static_assert(sizeof(TValue) == ABISWITCH(16, 16, 16), "size mismatch for value");
static_assert(sizeof(LuaNode) == ABISWITCH(32, 32, 32), "size mismatch for table entry");
#endif

static_assert(offsetof(TString, data) == ABISWITCH(24, 20, 20), "size mismatch for string header");
static_assert(sizeof(LuaTable) == ABISWITCH(48, 32, 32), "size mismatch for table header");
static_assert(offsetof(Buffer, data) == ABISWITCH(8, 8, 8), "size mismatch for buffer header");

// The userdata is designed to provide 16 byte alignment for 16 byte and larger userdata sizes
static_assert(offsetof(Udata, data) == 16, "data must be at a precise offset to provide proper alignment");

const size_t kSizeClasses = LUA_SIZECLASSES;

// Controls the number of entries in SizeClassConfig and defines the maximum possible paged allocation size
// Modifications require updating the SizeClassConfig initialization
const size_t kMaxSmallSize = 1024;

// Effective limit on object size to use paged allocation
// Can be modified without additional changes to code, provided it is smaller than or equal to kMaxSmallSize
const size_t kMaxSmallSizeUsed = 1024;

const size_t kLargePageThreshold = 512; // larger pages are used for objects larger than this size to fit more of them into a page

// constant factor to reduce our page sizes by, to increase the chances that pages we allocate will
// allow external allocators to allocate them without wasting space due to rounding introduced by their heap metadata
const size_t kExternalAllocatorMetaDataReduction = 24;

const size_t kSmallPageSize = 16 * 1024 - kExternalAllocatorMetaDataReduction;
const size_t kLargePageSize = 32 * 1024 - kExternalAllocatorMetaDataReduction;

const size_t kBlockHeader = sizeof(double) > sizeof(void*) ? sizeof(double) : sizeof(void*); // suitable for aligning double & void* on all platforms
const size_t kGCOLinkOffset = (sizeof(GCheader) + sizeof(void*) - 1) & ~(sizeof(void*) - 1); // GCO pages contain freelist links after the GC header

struct SizeClassConfig
{
    int sizeOfClass[kSizeClasses];
    int8_t classForSize[kMaxSmallSize + 1];
    int classCount = 0;

    SizeClassConfig()
    {
        memset(sizeOfClass, 0, sizeof(sizeOfClass));
        memset(classForSize, -1, sizeof(classForSize));

        // we use a progressive size class scheme:
        // - all size classes are aligned by 8b to satisfy pointer alignment requirements
        // - we first allocate size classes in multiples of 8
        // - after the first cutoff we allocate size classes in multiples of 16
        // - after the second cutoff we allocate size classes in multiples of 32
        // - after the third cutoff we allocate size classes in multiples of 64
        // this balances internal fragmentation vs external fragmentation
        for (int size = 8; size < 64; size += 8)
            sizeOfClass[classCount++] = size;

        for (int size = 64; size < 256; size += 16)
            sizeOfClass[classCount++] = size;

        for (int size = 256; size < 512; size += 32)
            sizeOfClass[classCount++] = size;

        for (int size = 512; size <= 1024; size += 64)
            sizeOfClass[classCount++] = size;

        LUAU_ASSERT(size_t(classCount) <= kSizeClasses);

        // fill the lookup table for all classes
        for (int klass = 0; klass < classCount; ++klass)
            classForSize[sizeOfClass[klass]] = int8_t(klass);

        // fill the gaps in the lookup table
        for (int size = kMaxSmallSize - 1; size >= 0; --size)
            if (classForSize[size] < 0)
                classForSize[size] = classForSize[size + 1];
    }
};

const SizeClassConfig kSizeClassConfig;

// size class for a block of size sz; returns -1 for size 0 because empty allocations take no space
#define sizeclass(sz) (size_t((sz) - 1) < kMaxSmallSizeUsed ? kSizeClassConfig.classForSize[sz] : -1)

// metadata for a block is stored in the first pointer of the block
#define metadata(block) (*(void**)(block))
#define freegcolink(block) (*(void**)((char*)block + kGCOLinkOffset))

#if defined(LUAU_ASSERTENABLED)
#define debugpageset(x) (x)
#else
#define debugpageset(x) NULL
#endif

struct lua_Page
{
    // list of pages with free blocks
    lua_Page* prev;
    lua_Page* next;

    // list of all pages
    lua_Page* listprev;
    lua_Page* listnext;

    int pageSize;  // page size in bytes, including page header
    int blockSize; // block size in bytes, including block header (for non-GCO)

    void* freeList; // next free block in this page; linked with metadata()/freegcolink()
    int freeNext;   // next free block offset in this page, in bytes; when negative, freeList is used instead
    int busyBlocks; // number of blocks allocated out of this page

    // provide additional padding based on current object size to provide 16 byte alignment of data
    // a later static_assert checks that this requirement is held
    char padding[sizeof(void*) == 8 ? 8 : 12];

    char data[1];
};

static_assert(offsetof(lua_Page, data) % 16 == 0, "data must be 16 byte aligned to provide properly aligned allocation of userdata objects");

l_noret luaM_toobig(lua_State* L)
{
    luaG_runerror(L, "memory allocation error: block too big");
}

static lua_Page* newpage(lua_State* L, lua_Page** pageset, int pageSize, int blockSize, int blockCount)
{
    global_State* g = L->global;

    LUAU_ASSERT(pageSize - int(offsetof(lua_Page, data)) >= blockSize * blockCount);

    lua_Page* page = (lua_Page*)(*g->frealloc)(g->ud, NULL, 0, pageSize);
    if (!page)
        luaD_throw(L, LUA_ERRMEM);

    ASAN_POISON_MEMORY_REGION(page->data, blockSize * blockCount);

    // set up the page header
    page->prev = NULL;
    page->next = NULL;

    page->listprev = NULL;
    page->listnext = NULL;

    page->pageSize = pageSize;
    page->blockSize = blockSize;

    // note: we start with the last block in the page and move downward
    // either order would work, but this way we don't need to store the block count in the page
    // additionally, GC stores objects in singly linked lists, and this way the GC lists end up in increasing pointer order
    page->freeList = NULL;
    page->freeNext = (blockCount - 1) * blockSize;
    page->busyBlocks = 0;

    if (pageset)
    {
        page->listnext = *pageset;
        if (page->listnext)
            page->listnext->listprev = page;
        *pageset = page;
    }

    return page;
}

// this is part of a cold path in newblock and newgcoblock
// it is marked as noinline to prevent it from being inlined into those functions
// if it is inlined, then the compiler may determine those functions are "too big" to be profitably inlined, which results in reduced performance
LUAU_NOINLINE static lua_Page* newclasspage(lua_State* L, lua_Page** freepageset, lua_Page** pageset, uint8_t sizeClass, bool storeMetadata)
{
    int sizeOfClass = kSizeClassConfig.sizeOfClass[sizeClass];
    int pageSize = sizeOfClass > int(kLargePageThreshold) ? kLargePageSize : kSmallPageSize;
    int blockSize = sizeOfClass + (storeMetadata ? kBlockHeader : 0);
    int blockCount = (pageSize - offsetof(lua_Page, data)) / blockSize;

    lua_Page* page = newpage(L, pageset, pageSize, blockSize, blockCount);

    // prepend the page to the page freelist (which is empty, because we only ever allocate a new page when it is!)
    LUAU_ASSERT(!freepageset[sizeClass]);
    freepageset[sizeClass] = page;

    return page;
}

static void freepage(lua_State* L, lua_Page** pageset, lua_Page* page)
{
    global_State* g = L->global;

    if (pageset)
    {
        // remove the page from the allpages list
        if (page->listnext)
            page->listnext->listprev = page->listprev;

        if (page->listprev)
            page->listprev->listnext = page->listnext;
        else if (*pageset == page)
            *pageset = page->listnext;
    }

    // so long
    (*g->frealloc)(g->ud, page, page->pageSize, 0);
}

static void freeclasspage(lua_State* L, lua_Page** freepageset, lua_Page** pageset, lua_Page* page, uint8_t sizeClass)
{
    // remove the page from the freelist
    if (page->next)
        page->next->prev = page->prev;

    if (page->prev)
        page->prev->next = page->next;
    else if (freepageset[sizeClass] == page)
        freepageset[sizeClass] = page->next;

    freepage(L, pageset, page);
}

static void* newblock(lua_State* L, int sizeClass)
{
    global_State* g = L->global;
    lua_Page* page = g->freepages[sizeClass];

    // slow path: no page in the freelist, allocate a new one
    if (!page)
        page = newclasspage(L, g->freepages, debugpageset(&g->allpages), sizeClass, true);

    LUAU_ASSERT(!page->prev);
    LUAU_ASSERT(page->freeList || page->freeNext >= 0);
    LUAU_ASSERT(size_t(page->blockSize) == kSizeClassConfig.sizeOfClass[sizeClass] + kBlockHeader);

    void* block;

    if (page->freeNext >= 0)
    {
        block = &page->data + page->freeNext;
        ASAN_UNPOISON_MEMORY_REGION(block, page->blockSize);

        page->freeNext -= page->blockSize;
        page->busyBlocks++;
    }
    else
    {
        block = page->freeList;
        ASAN_UNPOISON_MEMORY_REGION(block, page->blockSize);

        page->freeList = metadata(block);
        page->busyBlocks++;
    }

    // the first word in a block points back to the page
    metadata(block) = page;

    // if we allocate the last block out of a page, we need to remove it from the free list
    if (!page->freeList && page->freeNext < 0)
    {
        g->freepages[sizeClass] = page->next;
        if (page->next)
            page->next->prev = NULL;
        page->next = NULL;
    }

    // the user data is right after the metadata
    return (char*)block + kBlockHeader;
}

static void* newgcoblock(lua_State* L, int sizeClass)
{
    global_State* g = L->global;
    lua_Page* page = g->freegcopages[sizeClass];

    // slow path: no page in the freelist, allocate a new one
    if (!page)
        page = newclasspage(L, g->freegcopages, &g->allgcopages, sizeClass, false);

    LUAU_ASSERT(!page->prev);
    LUAU_ASSERT(page->freeList || page->freeNext >= 0);
    LUAU_ASSERT(page->blockSize == kSizeClassConfig.sizeOfClass[sizeClass]);

    void* block;

    if (page->freeNext >= 0)
    {
        block = &page->data + page->freeNext;
        ASAN_UNPOISON_MEMORY_REGION(block, page->blockSize);

        page->freeNext -= page->blockSize;
        page->busyBlocks++;
    }
    else
    {
        block = page->freeList;
        ASAN_UNPOISON_MEMORY_REGION((char*)block + sizeof(GCheader), page->blockSize - sizeof(GCheader));

        // when separate block metadata is not used, the free list link is stored inside the block data itself
        page->freeList = freegcolink(block);
        page->busyBlocks++;
    }

    // if we allocate the last block out of a page, we need to remove it from the free list
    if (!page->freeList && page->freeNext < 0)
    {
        g->freegcopages[sizeClass] = page->next;
        if (page->next)
            page->next->prev = NULL;
        page->next = NULL;
    }

    return block;
}

static void freeblock(lua_State* L, int sizeClass, void* block)
{
    global_State* g = L->global;

    // the user data is right after the metadata
    LUAU_ASSERT(block);
    block = (char*)block - kBlockHeader;

    lua_Page* page = (lua_Page*)metadata(block);
    LUAU_ASSERT(page && page->busyBlocks > 0);
    LUAU_ASSERT(size_t(page->blockSize) == kSizeClassConfig.sizeOfClass[sizeClass] + kBlockHeader);
    LUAU_ASSERT(block >= page->data && block < (char*)page + page->pageSize);

    // if the page wasn't in the page free list, it should be now since it got a free block!
    if (!page->freeList && page->freeNext < 0)
    {
        LUAU_ASSERT(!page->prev);
        LUAU_ASSERT(!page->next);

        page->next = g->freepages[sizeClass];
        if (page->next)
            page->next->prev = page;
        g->freepages[sizeClass] = page;
    }

    // add the block to the free list inside the page
    metadata(block) = page->freeList;
    page->freeList = block;

    ASAN_POISON_MEMORY_REGION(block, page->blockSize);

    page->busyBlocks--;

    // if it's the last block in the page, we don't need the page
    if (page->busyBlocks == 0)
        freeclasspage(L, g->freepages, debugpageset(&g->allpages), page, sizeClass);
}

static void freegcoblock(lua_State* L, int sizeClass, void* block, lua_Page* page)
{
    LUAU_ASSERT(page && page->busyBlocks > 0);
    LUAU_ASSERT(page->blockSize == kSizeClassConfig.sizeOfClass[sizeClass]);
    LUAU_ASSERT(block >= page->data && block < (char*)page + page->pageSize);

    global_State* g = L->global;

    // if the page wasn't in the page free list, it should be now since it got a free block!
    if (!page->freeList && page->freeNext < 0)
    {
        LUAU_ASSERT(!page->prev);
        LUAU_ASSERT(!page->next);

        page->next = g->freegcopages[sizeClass];
        if (page->next)
            page->next->prev = page;
        g->freegcopages[sizeClass] = page;
    }

    // when separate block metadata is not used, the free list link is stored inside the block data itself
    freegcolink(block) = page->freeList;
    page->freeList = block;

    ASAN_POISON_MEMORY_REGION((char*)block + sizeof(GCheader), page->blockSize - sizeof(GCheader));

    page->busyBlocks--;

    // if it's the last block in the page, we don't need the page
    if (page->busyBlocks == 0)
        freeclasspage(L, g->freegcopages, &g->allgcopages, page, sizeClass);
}

void* luaM_new_(lua_State* L, size_t nsize, uint8_t memcat)
{
    global_State* g = L->global;

    int nclass = sizeclass(nsize);

    void* block = nclass >= 0 ? newblock(L, nclass) : (*g->frealloc)(g->ud, NULL, 0, nsize);
    if (block == NULL && nsize > 0)
        luaD_throw(L, LUA_ERRMEM);

    g->totalbytes += nsize;
    g->memcatbytes[memcat] += nsize;

    if (LUAU_UNLIKELY(!!g->cb.onallocate))
    {
        g->cb.onallocate(L, 0, nsize);
    }

    return block;
}

GCObject* luaM_newgco_(lua_State* L, size_t nsize, uint8_t memcat)
{
    // we need to accommodate space for the free block link (freegcolink)
    LUAU_ASSERT(nsize >= kGCOLinkOffset + sizeof(void*));

    global_State* g = L->global;

    int nclass = sizeclass(nsize);

    void* block = NULL;

    if (nclass >= 0)
    {
        block = newgcoblock(L, nclass);
    }
    else
    {
        lua_Page* page = newpage(L, &g->allgcopages, offsetof(lua_Page, data) + int(nsize), int(nsize), 1);

        block = &page->data;
        ASAN_UNPOISON_MEMORY_REGION(block, page->blockSize);

        page->freeNext -= page->blockSize;
        page->busyBlocks++;
    }

    if (block == NULL && nsize > 0)
        luaD_throw(L, LUA_ERRMEM);

    g->totalbytes += nsize;
    g->memcatbytes[memcat] += nsize;

    if (LUAU_UNLIKELY(!!g->cb.onallocate))
    {
        g->cb.onallocate(L, 0, nsize);
    }

    return (GCObject*)block;
}

void luaM_free_(lua_State* L, void* block, size_t osize, uint8_t memcat)
{
    global_State* g = L->global;
    LUAU_ASSERT((osize == 0) == (block == NULL));

    int oclass = sizeclass(osize);

    if (oclass >= 0)
        freeblock(L, oclass, block);
    else
        (*g->frealloc)(g->ud, block, osize, 0);

    g->totalbytes -= osize;
    g->memcatbytes[memcat] -= osize;
}

void luaM_freegco_(lua_State* L, GCObject* block, size_t osize, uint8_t memcat, lua_Page* page)
{
    global_State* g = L->global;
    LUAU_ASSERT((osize == 0) == (block == NULL));

    int oclass = sizeclass(osize);

    if (oclass >= 0)
    {
        block->gch.tt = LUA_TNIL;

        freegcoblock(L, oclass, block, page);
    }
    else
    {
        LUAU_ASSERT(page->busyBlocks == 1);
        LUAU_ASSERT(size_t(page->blockSize) == osize);
        LUAU_ASSERT((void*)block == page->data);

        freepage(L, &g->allgcopages, page);
    }

    g->totalbytes -= osize;
    g->memcatbytes[memcat] -= osize;
}

void* luaM_realloc_(lua_State* L, void* block, size_t osize, size_t nsize, uint8_t memcat)
{
    global_State* g = L->global;
    LUAU_ASSERT((osize == 0) == (block == NULL));

    int nclass = sizeclass(nsize);
    int oclass = sizeclass(osize);
    void* result;

    // if either block needs to be allocated using a block allocator, we can't use realloc directly
    if (nclass >= 0 || oclass >= 0)
    {
        result = nclass >= 0 ? newblock(L, nclass) : (*g->frealloc)(g->ud, NULL, 0, nsize);
        if (result == NULL && nsize > 0)
            luaD_throw(L, LUA_ERRMEM);

        if (osize > 0 && nsize > 0)
            memcpy(result, block, osize < nsize ? osize : nsize);

        if (oclass >= 0)
            freeblock(L, oclass, block);
        else
            (*g->frealloc)(g->ud, block, osize, 0);
    }
    else
    {
        result = (*g->frealloc)(g->ud, block, osize, nsize);
        if (result == NULL && nsize > 0)
            luaD_throw(L, LUA_ERRMEM);
    }

    LUAU_ASSERT((nsize == 0) == (result == NULL));
    g->totalbytes = (g->totalbytes - osize) + nsize;
    g->memcatbytes[memcat] += nsize - osize;

    if (LUAU_UNLIKELY(!!g->cb.onallocate))
    {
        g->cb.onallocate(L, osize, nsize);
    }

    return result;
}

void luaM_getpagewalkinfo(lua_Page* page, char** start, char** end, int* busyBlocks, int* blockSize)
{
    int blockCount = (page->pageSize - offsetof(lua_Page, data)) / page->blockSize;

    LUAU_ASSERT(page->freeNext >= -page->blockSize && page->freeNext <= (blockCount - 1) * page->blockSize);

    char* data = page->data; // silences ubsan when indexing page->data

    *start = data + page->freeNext + page->blockSize;
    *end = data + blockCount * page->blockSize;
    *busyBlocks = page->busyBlocks;
    *blockSize = page->blockSize;
}

void luaM_getpageinfo(lua_Page* page, int* pageBlocks, int* busyBlocks, int* blockSize, int* pageSize)
{
    *pageBlocks = (page->pageSize - offsetof(lua_Page, data)) / page->blockSize;
    *busyBlocks = page->busyBlocks;
    *blockSize = page->blockSize;
    *pageSize = page->pageSize;
}

lua_Page* luaM_getnextpage(lua_Page* page)
{
    return page->listnext;
}

void luaM_visitpage(lua_Page* page, void* context, bool (*visitor)(void* context, lua_Page* page, GCObject* gco))
{
    char* start;
    char* end;
    int busyBlocks;
    int blockSize;
    luaM_getpagewalkinfo(page, &start, &end, &busyBlocks, &blockSize);

    for (char* pos = start; pos != end; pos += blockSize)
    {
        GCObject* gco = (GCObject*)pos;

        // skip memory blocks that are already freed
        if (gco->gch.tt == LUA_TNIL)
            continue;

        // when true is returned, it means that the element was deleted
        if (visitor(context, page, gco))
        {
            LUAU_ASSERT(busyBlocks > 0);

            // if the last block was removed, the page would be removed as well
            if (--busyBlocks == 0)
                break;
        }
    }
}

void luaM_visitgco(lua_State* L, void* context, bool (*visitor)(void* context, lua_Page* page, GCObject* gco))
{
    global_State* g = L->global;

    for (lua_Page* curr = g->allgcopages; curr;)
    {
        lua_Page* next = curr->listnext; // the block visit might destroy the page

        luaM_visitpage(curr, context, visitor);

        curr = next;
    }
}