GitHub Repository: Roblox/luau
Path: blob/master/VM/src/lgc.cpp
// This file is part of the Luau programming language and is licensed under MIT License; see LICENSE.txt for details
// This code is based on Lua 5.x implementation licensed under MIT License; see lua_LICENSE.txt for details
#include "lgc.h"

#include "lobject.h"
#include "lstate.h"
#include "ltable.h"
#include "lfunc.h"
#include "lstring.h"
#include "ldo.h"
#include "lmem.h"
#include "ludata.h"
#include "lbuffer.h"

#include <string.h>

/*
 * Luau uses an incremental non-generational non-moving mark&sweep garbage collector.
 *
 * The collector runs in three stages: mark, atomic and sweep. Mark and sweep are incremental and try to do a limited amount
 * of work every GC step; atomic is run once per GC cycle and is indivisible. In either case, the work happens during GC
 * steps that are "scheduled" by the GC pacing algorithm - the steps happen either from explicit calls to lua_gc, or after
 * the mutator (aka application) allocates some amount of memory, which is known as "GC assist". In all cases, GC steps
 * can't happen concurrently with other access to VM state.
 *
 * The current GC stage is stored in global_State::gcstate; in addition to the three stages above there are two extra stages
 * for pause and second-phase mark, explained below.
 *
 * The GC pacer is an algorithm that tries to ensure that the GC can always catch up to the application allocating garbage, but do this
 * with a minimal amount of effort. To configure the pacer Luau provides control over three variables: GC goal, defined as the
 * target heap size during the atomic phase in relation to the live heap size (e.g. a 200% goal means the heap's worst-case size is double
 * the total size of alive objects), step size (how many kilobytes the application should allocate for a GC step to trigger), and
 * GC multiplier (how much the GC should try to mark relative to how much the application allocated). It's critical that the GC
 * multiplier is significantly above 1, as this is what allows the GC to catch up to the application's allocation rate;
 * GC goal and GC multiplier are linked in subtle ways, described in lua.h comments for LUA_GCSETGOAL.
 *
 * During mark, the GC tries to identify all reachable objects and mark them as reachable, while keeping unreachable objects unmarked.
 * During sweep, the GC tries to sweep all objects that were not reachable at the end of mark. The atomic phase is needed to ensure
 * that all pending marking has completed and all objects that are still marked as unreachable are, in fact, unreachable.
 *
 * Notably, during mark the GC doesn't free any objects, and so the heap size constantly grows; during sweep, the GC doesn't do any marking
 * work, so it can't immediately free objects that became unreachable after sweeping started.
 *
 * Every collectable object has one of three colors at any given point in time: white, gray or black. This coloring scheme
 * is necessary to implement incremental marking: white objects have not been marked and may be unreachable, black objects
 * have been marked and will not be marked again if they stay black, and gray objects have been marked but may contain unmarked
 * references.
 *
 * Objects are allocated as white; however, during sweep, we need to differentiate between objects that remained white in the mark
 * phase (these are not reachable and can be freed) and objects that were allocated after the mark phase ended. Because of this, the
 * colors are encoded using three bits inside GCheader::marked: white0, white1 and black (so technically we use a four-color scheme:
 * any object can be white0, white1, gray or black). All bits are exclusive, and gray objects have all three bits unset. This allows
 * us to have the "current" white bit, which is flipped during the atomic stage - during sweeping, objects that have the white color from
 * the previous mark may be deleted, and all other objects may or may not be reachable, and will be changed to the current white color,
 * so that the next mark can start coloring objects from scratch again.
 *
 * Crucially, the coloring scheme comes with what's known as a tri-color invariant: a black object may never point to a white object.
 *
 * At the end of the atomic stage, the expectation is that there are no gray objects anymore, which means all objects are either black
 * (reachable) or white (unreachable = dead). The tri-color invariant is maintained throughout the mark and atomic phases. To uphold this
 * invariant, every modification of an object needs to check if the object is black and the new referent is white; if so, we
 * need to either mark the referent, making it non-white (known as a forward barrier), or mark the object as gray and queue it
 * for additional marking (known as a backward barrier).
 *
 * Luau uses both types of barriers. Forward barriers advance GC progress, since they don't create new outstanding work for the GC,
 * but they may be expensive when an object is modified many times in succession. Backward barriers are cheaper, as they defer
 * most of the work until "later", but they require queueing the object for a rescan which isn't always possible. Table writes usually
 * use backward barriers (but switch to forward barriers during second-phase mark), whereas upvalue writes and setmetatable use forward
 * barriers.
 *
 * Since marking is incremental, it needs a way to track progress, which is implemented as a gray set: at any point, objects that
 * are gray need to mark their white references, objects that are black have no pending work, and objects that are white have not yet
 * been reached. Once the gray set is empty, the work completes; as such, incremental marking is as simple as removing an object from
 * the gray set and turning it black (which requires turning all its white references gray). The gray set is implemented as
 * an intrusive singly linked list, using the `gclist` field in multiple objects (functions, tables, threads and protos). When an object
 * doesn't have a `gclist` field, the marking of that object needs to be "immediate", changing the colors of all references in one go.
 *
 * When a black object is modified, it needs to become gray again. Objects like this are placed on a separate `grayagain` list by a
 * barrier - this is important because it allows us to have a mark stage that terminates when the gray set is empty even if the mutator
 * is constantly changing existing objects to gray. After the mark stage finishes traversing the `gray` list, we copy the `grayagain` list to `gray`
 * once and incrementally mark it again. During this phase of marking, we may get more objects marked as `grayagain`, so after we finish
 * emptying out the `gray` list the second time, we finish the mark stage and do final marking of `grayagain` during the atomic phase.
 * GC works correctly without this second-phase mark (called GCSpropagateagain), but it reduces the time spent during the atomic phase.
 *
 * Sweeping is also incremental, but instead of working at the granularity of an object, it works at the granularity of a page: all GC
 * objects are allocated in special pages (see lmem.cpp for details), and the sweeper traverses all objects in one page in one incremental
 * step, freeing objects that aren't reachable (old white), and recoloring all other objects with the new white to prepare them for the next
 * mark. During sweeping we don't need to maintain the GC invariant, because our goal is to paint all objects with the current white -
 * however, some barriers will still trigger (because some reachable objects are still black as sweeping didn't get to them yet), and
 * some barriers will proactively mark black objects as white to avoid extra barriers from triggering excessively.
 *
 * Most references that the GC deals with are strong, and as such they fit neatly into the incremental marking scheme. Some, however, are
 * weak - notably, tables can be marked as having weak keys/values (using the __mode metafield). During incremental marking, we don't know
 * for certain if a given object is alive - if it's marked as black, it definitely was reachable during marking, but if it's marked as
 * white, we don't know if it's actually unreachable. Because of this, we need to defer weak table handling to the atomic phase; after
 * all objects are marked, we traverse all weak tables (that are linked into special weak table lists using `gclist` during marking),
 * and remove all entries that have white keys or values. If keys or values are strong, they are marked normally.
 *
 * The simplified scheme described above isn't fully accurate because of threads, upvalues and strings.
 *
 * Strings are semantically black (they are initially white, and when the mark stage reaches a string, it changes its color and never
 * touches the object again), but they are technically marked as gray - the black bit is never set on a string object. This behavior
 * is inherited from the Lua 5.1 GC, but doesn't have a clear rationale - effectively, strings are marked as gray but are never part of
 * a gray list.
 *
 * Threads are hard to deal with because for them to fit into the white-gray-black scheme, writes to thread stacks need to have barriers
 * that turn the thread from black (already scanned) to gray - but this is very expensive because stack writes are very common. To
 * get around this problem, threads have an "active" state which means that a thread is actively executing code. When the GC reaches an active
 * thread, it keeps it gray, and rescans it during the atomic phase. When a thread is inactive, the GC instead paints the thread black. All
 * API calls that can write to thread stacks outside of execution (which implies active) use a thread barrier that checks if the thread is
 * black, and if it is, marks it as gray and puts it on a gray list to be rescanned during the atomic phase.
 *
 * Upvalues are special objects that can be closed, in which case they contain the value (acting as a reference cell) and can be dealt
 * with using the regular algorithm, or open, in which case they refer to a stack slot in some other thread. These are difficult to deal
 * with because the stack writes are not monitored. Because of this, open upvalues are treated in a somewhat special way: they are never marked
 * as black (doing so would violate the GC invariant), and they are kept in a special global list (global_State::uvhead) which is traversed
 * during the atomic phase. This is needed because an open upvalue might point to a stack location in a dead thread that never marked the stack
 * slot - upvalues like this are identified because they don't have the `markedopen` bit set during thread traversal, and are closed in `clearupvals`.
 */

#define GC_SWEEPPAGESTEPCOST 16

#define GC_INTERRUPT(state) \
    { \
        void (*interrupt)(lua_State*, int) = g->cb.interrupt; \
        if (LUAU_UNLIKELY(!!interrupt)) \
            interrupt(L, state); \
    }

#define maskmarks cast_byte(~(bitmask(BLACKBIT) | WHITEBITS))

#define makewhite(g, x) ((x)->gch.marked = cast_byte(((x)->gch.marked & maskmarks) | luaC_white(g)))

#define white2gray(x) reset2bits((x)->gch.marked, WHITE0BIT, WHITE1BIT)
#define black2gray(x) resetbit((x)->gch.marked, BLACKBIT)

#define stringmark(s) reset2bits((s)->marked, WHITE0BIT, WHITE1BIT)

#define markvalue(g, o) \
    { \
        checkconsistency(o); \
        if (iscollectable(o) && iswhite(gcvalue(o))) \
            reallymarkobject(g, gcvalue(o)); \
    }

#define markobject(g, t) \
    { \
        if (iswhite(obj2gco(t))) \
            reallymarkobject(g, obj2gco(t)); \
    }

#ifdef LUAI_GCMETRICS
static void recordGcStateStep(global_State* g, int startgcstate, double seconds, bool assist, size_t work)
{
    switch (startgcstate)
    {
    case GCSpause:
        // record root mark time if we have switched to next state
        if (g->gcstate == GCSpropagate)
        {
            g->gcmetrics.currcycle.marktime += seconds;

            if (assist)
                g->gcmetrics.currcycle.markassisttime += seconds;
        }
        break;
    case GCSpropagate:
    case GCSpropagateagain:
        g->gcmetrics.currcycle.marktime += seconds;
        g->gcmetrics.currcycle.markwork += work;

        if (assist)
            g->gcmetrics.currcycle.markassisttime += seconds;
        break;
    case GCSatomic:
        g->gcmetrics.currcycle.atomictime += seconds;
        break;
    case GCSsweep:
        g->gcmetrics.currcycle.sweeptime += seconds;
        g->gcmetrics.currcycle.sweepwork += work;

        if (assist)
            g->gcmetrics.currcycle.sweepassisttime += seconds;
        break;
    default:
        LUAU_ASSERT(!"Unexpected GC state");
    }

    if (assist)
    {
        g->gcmetrics.stepassisttimeacc += seconds;
        g->gcmetrics.currcycle.assistwork += work;
    }
    else
    {
        g->gcmetrics.stepexplicittimeacc += seconds;
        g->gcmetrics.currcycle.explicitwork += work;
    }
}

static double recordGcDeltaTime(double& timer)
{
    double now = lua_clock();
    double delta = now - timer;
    timer = now;
    return delta;
}

static void startGcCycleMetrics(global_State* g)
{
    g->gcmetrics.currcycle.starttimestamp = lua_clock();
    g->gcmetrics.currcycle.pausetime = g->gcmetrics.currcycle.starttimestamp - g->gcmetrics.lastcycle.endtimestamp;
}

static void finishGcCycleMetrics(global_State* g)
{
    g->gcmetrics.currcycle.endtimestamp = lua_clock();
    g->gcmetrics.currcycle.endtotalsizebytes = g->totalbytes;

    g->gcmetrics.completedcycles++;
    g->gcmetrics.lastcycle = g->gcmetrics.currcycle;
    g->gcmetrics.currcycle = GCCycleMetrics();

    g->gcmetrics.currcycle.starttotalsizebytes = g->totalbytes;
    g->gcmetrics.currcycle.heaptriggersizebytes = g->GCthreshold;
}
#endif

static void removeentry(LuaNode* n)
{
    LUAU_ASSERT(ttisnil(gval(n)));
    if (iscollectable(gkey(n)))
        setttype(gkey(n), LUA_TDEADKEY); // dead key; remove it
}

static void reallymarkobject(global_State* g, GCObject* o)
{
    LUAU_ASSERT(iswhite(o) && !isdead(g, o));
    white2gray(o);
    switch (o->gch.tt)
    {
    case LUA_TSTRING:
    {
        return;
    }
    case LUA_TUSERDATA:
    {
        LuaTable* mt = gco2u(o)->metatable;
        gray2black(o); // udata are never gray
        if (mt)
            markobject(g, mt);
        return;
    }
    case LUA_TUPVAL:
    {
        UpVal* uv = gco2uv(o);
        markvalue(g, uv->v);
        if (!upisopen(uv)) // closed?
            gray2black(o); // open upvalues are never black
        return;
    }
    case LUA_TFUNCTION:
    {
        gco2cl(o)->gclist = g->gray;
        g->gray = o;
        break;
    }
    case LUA_TTABLE:
    {
        gco2h(o)->gclist = g->gray;
        g->gray = o;
        break;
    }
    case LUA_TTHREAD:
    {
        gco2th(o)->gclist = g->gray;
        g->gray = o;
        break;
    }
    case LUA_TBUFFER:
    {
        gray2black(o); // buffers are never gray
        return;
    }
    case LUA_TPROTO:
    {
        gco2p(o)->gclist = g->gray;
        g->gray = o;
        break;
    }
    default:
        LUAU_ASSERT(0);
    }
}

static const char* gettablemode(global_State* g, LuaTable* h)
{
    const TValue* mode = gfasttm(g, h->metatable, TM_MODE);

    if (mode && ttisstring(mode))
        return svalue(mode);

    return NULL;
}

static int traversetable(global_State* g, LuaTable* h)
{
    int i;
    int weakkey = 0;
    int weakvalue = 0;
    if (h->metatable)
        markobject(g, cast_to(LuaTable*, h->metatable));

    // is there a weak mode?
    if (const char* modev = gettablemode(g, h))
    {
        weakkey = (strchr(modev, 'k') != NULL);
        weakvalue = (strchr(modev, 'v') != NULL);
        if (weakkey || weakvalue)
        { // is really weak?
            h->gclist = g->weak; // must be cleared after GC, ...
            g->weak = obj2gco(h); // ... so put in the appropriate list
        }
    }

    if (weakkey && weakvalue)
        return 1;
    if (!weakvalue)
    {
        i = h->sizearray;
        while (i--)
            markvalue(g, &h->array[i]);
    }
    i = sizenode(h);
    while (i--)
    {
        LuaNode* n = gnode(h, i);
        LUAU_ASSERT(ttype(gkey(n)) != LUA_TDEADKEY || ttisnil(gval(n)));
        if (ttisnil(gval(n)))
            removeentry(n); // remove empty entries
        else
        {
            LUAU_ASSERT(!ttisnil(gkey(n)));
            if (!weakkey)
                markvalue(g, gkey(n));
            if (!weakvalue)
                markvalue(g, gval(n));
        }
    }
    return weakkey || weakvalue;
}

/*
** All marks are conditional because a GC may happen while the
** prototype is still being created
*/
static void traverseproto(global_State* g, Proto* f)
{
    int i;
    if (f->source)
        stringmark(f->source);
    if (f->debugname)
        stringmark(f->debugname);
    for (i = 0; i < f->sizek; i++) // mark literals
        markvalue(g, &f->k[i]);
    for (i = 0; i < f->sizeupvalues; i++)
    { // mark upvalue names
        if (f->upvalues[i])
            stringmark(f->upvalues[i]);
    }
    for (i = 0; i < f->sizep; i++)
    { // mark nested protos
        if (f->p[i])
            markobject(g, f->p[i]);
    }
    for (i = 0; i < f->sizelocvars; i++)
    { // mark local-variable names
        if (f->locvars[i].varname)
            stringmark(f->locvars[i].varname);
    }
}

static void traverseclosure(global_State* g, Closure* cl)
{
    markobject(g, cl->env);
    if (cl->isC)
    {
        int i;
        for (i = 0; i < cl->nupvalues; i++) // mark its upvalues
            markvalue(g, &cl->c.upvals[i]);
    }
    else
    {
        int i;
        LUAU_ASSERT(cl->nupvalues == cl->l.p->nups);
        markobject(g, cast_to(Proto*, cl->l.p));
        for (i = 0; i < cl->nupvalues; i++) // mark its upvalues
            markvalue(g, &cl->l.uprefs[i]);
    }
}

static void traversestack(global_State* g, lua_State* l)
{
    markobject(g, l->gt);
    if (l->namecall)
        stringmark(l->namecall);
    for (StkId o = l->stack; o < l->top; o++)
        markvalue(g, o);
    for (UpVal* uv = l->openupval; uv; uv = uv->u.open.threadnext)
    {
        LUAU_ASSERT(upisopen(uv));
        uv->markedopen = 1;
        markobject(g, uv);
    }
}

static void clearstack(lua_State* l)
{
    StkId stack_end = l->stack + l->stacksize;
    for (StkId o = l->top; o < stack_end; o++) // clear not-marked stack slice
        setnilvalue(o);
}

static void shrinkstack(lua_State* L)
{
    // compute used stack - note that we can't use th->top if we're in the middle of vararg call
    StkId lim = L->top;
    for (CallInfo* ci = L->base_ci; ci <= L->ci; ci++)
    {
        LUAU_ASSERT(ci->top <= L->stack_last);
        if (lim < ci->top)
            lim = ci->top;
    }

    // shrink stack and callinfo arrays if we aren't using most of the space
    int ci_used = cast_int(L->ci - L->base_ci); // number of `ci' in use
    int s_used = cast_int(lim - L->stack);      // part of stack in use
    if (L->size_ci > LUAI_MAXCALLS)             // handling overflow?
        return;                                 // do not touch the stacks

    if (3 * size_t(ci_used) < size_t(L->size_ci) && 2 * BASIC_CI_SIZE < L->size_ci)
        luaD_reallocCI(L, L->size_ci / 2); // still big enough...
    condhardstacktests(luaD_reallocCI(L, ci_used + 1));

    if (3 * size_t(s_used) < size_t(L->stacksize) && 2 * (BASIC_STACK_SIZE + EXTRA_STACK) < L->stacksize)
        luaD_reallocstack(L, L->stacksize / 2, 0); // still big enough...
    condhardstacktests(luaD_reallocstack(L, s_used, 0));
}

static void shrinkstackprotected(lua_State* L)
{
    struct CallContext
    {
        static void run(lua_State* L, void* ud)
        {
            shrinkstack(L);
        }
    } ctx = {};

    // the resize call can fail on exception, in which case we will continue with original size
    int status = luaD_rawrunprotected(L, &CallContext::run, &ctx);
    LUAU_ASSERT(status == LUA_OK || status == LUA_ERRMEM);
}

/*
** traverse one gray object, turning it to black.
** Returns `quantity' traversed.
*/
static size_t propagatemark(global_State* g)
{
    GCObject* o = g->gray;
    LUAU_ASSERT(isgray(o));
    gray2black(o);
    switch (o->gch.tt)
    {
    case LUA_TTABLE:
    {
        LuaTable* h = gco2h(o);
        g->gray = h->gclist;
        if (traversetable(g, h)) // table is weak?
            black2gray(o);       // keep it gray
        return sizeof(LuaTable) + sizeof(TValue) * h->sizearray + sizeof(LuaNode) * sizenode(h);
    }
    case LUA_TFUNCTION:
    {
        Closure* cl = gco2cl(o);
        g->gray = cl->gclist;
        traverseclosure(g, cl);
        return cl->isC ? sizeCclosure(cl->nupvalues) : sizeLclosure(cl->nupvalues);
    }
    case LUA_TTHREAD:
    {
        lua_State* th = gco2th(o);
        g->gray = th->gclist;

        bool active = th->isactive || th == th->global->mainthread;

        traversestack(g, th);

        // active threads will need to be rescanned later to mark new stack writes so we mark them gray again
        if (active)
        {
            th->gclist = g->grayagain;
            g->grayagain = o;

            black2gray(o);
        }

        // the stack needs to be cleared after the last modification of the thread state before sweep begins
        // if the thread is inactive, we might not see the thread in this cycle so we must clear it now
        if (!active || g->gcstate == GCSatomic)
            clearstack(th);

        // we could shrink stack at any time but we opt to do it during initial mark to do that just once per cycle
        if (g->gcstate == GCSpropagate)
            shrinkstackprotected(th);

        return sizeof(lua_State) + sizeof(TValue) * th->stacksize + sizeof(CallInfo) * th->size_ci;
    }
    case LUA_TPROTO:
    {
        Proto* p = gco2p(o);
        g->gray = p->gclist;
        traverseproto(g, p);

        return sizeof(Proto) + sizeof(Instruction) * p->sizecode + sizeof(Proto*) * p->sizep + sizeof(TValue) * p->sizek + p->sizelineinfo +
               sizeof(LocVar) * p->sizelocvars + sizeof(TString*) * p->sizeupvalues + p->sizetypeinfo;
    }
    default:
        LUAU_ASSERT(0);
        return 0;
    }
}

static size_t propagateall(global_State* g)
{
    size_t work = 0;
    while (g->gray)
    {
        work += propagatemark(g);
    }
    return work;
}

/*
** The next function tells whether a key or value can be cleared from
** a weak table. Non-collectable objects are never removed from weak
** tables. Strings behave as `values', so are never removed either. For
** other objects: if really collected, they cannot be kept.
*/
static int isobjcleared(GCObject* o)
{
    if (o->gch.tt == LUA_TSTRING)
    {
        stringmark(&o->ts); // strings are `values', so are never weak
        return 0;
    }

    return iswhite(o);
}

#define iscleared(o) (iscollectable(o) && isobjcleared(gcvalue(o)))

static void tableresizeprotected(lua_State* L, LuaTable* t, int nhsize)
{
    struct CallContext
    {
        LuaTable* t;
        int nhsize;

        static void run(lua_State* L, void* ud)
        {
            CallContext* ctx = (CallContext*)ud;

            luaH_resizehash(L, ctx->t, ctx->nhsize);
        }
    } ctx = {t, nhsize};

    // the resize call can fail on exception, in which case we will continue with original size
    int status = luaD_rawrunprotected(L, &CallContext::run, &ctx);
    LUAU_ASSERT(status == LUA_OK || status == LUA_ERRMEM);
}

/*
** clear collected entries from weak tables
*/
static size_t cleartable(lua_State* L, GCObject* l)
{
    size_t work = 0;
    while (l)
    {
        LuaTable* h = gco2h(l);
        work += sizeof(LuaTable) + sizeof(TValue) * h->sizearray + sizeof(LuaNode) * sizenode(h);

        int i = h->sizearray;
        while (i--)
        {
            TValue* o = &h->array[i];
            if (iscleared(o))   // value was collected?
                setnilvalue(o); // remove value
        }
        i = sizenode(h);
        int activevalues = 0;
        while (i--)
        {
            LuaNode* n = gnode(h, i);

            // non-empty entry?
            if (!ttisnil(gval(n)))
            {
                // can we clear key or value?
                if (iscleared(gkey(n)) || iscleared(gval(n)))
                {
                    setnilvalue(gval(n)); // remove value ...
                    removeentry(n);       // remove entry from table
                }
                else
                {
                    activevalues++;
                }
            }
        }

        if (const char* modev = gettablemode(L->global, h))
        {
            // are we allowed to shrink this weak table?
            if (strchr(modev, 's'))
            {
                // shrink at 37.5% occupancy
                if (activevalues < sizenode(h) * 3 / 8)
                    tableresizeprotected(L, h, activevalues);
            }
        }

        l = h->gclist;
    }
    return work;
}

static void freeobj(lua_State* L, GCObject* o, lua_Page* page)
{
    switch (o->gch.tt)
    {
    case LUA_TPROTO:
        luaF_freeproto(L, gco2p(o), page);
        break;
    case LUA_TFUNCTION:
        luaF_freeclosure(L, gco2cl(o), page);
        break;
    case LUA_TUPVAL:
        luaF_freeupval(L, gco2uv(o), page);
        break;
    case LUA_TTABLE:
        luaH_free(L, gco2h(o), page);
        break;
    case LUA_TTHREAD:
        LUAU_ASSERT(gco2th(o) != L && gco2th(o) != L->global->mainthread);
        luaE_freethread(L, gco2th(o), page);
        break;
    case LUA_TSTRING:
        luaS_free(L, gco2ts(o), page);
        break;
    case LUA_TUSERDATA:
        luaU_freeudata(L, gco2u(o), page);
        break;
    case LUA_TBUFFER:
        luaB_freebuffer(L, gco2buf(o), page);
        break;
    default:
        LUAU_ASSERT(0);
    }
}

static void stringresizeprotected(lua_State* L, int newsize)
{
    struct CallContext
    {
        int newsize;

        static void run(lua_State* L, void* ud)
        {
            CallContext* ctx = (CallContext*)ud;

            luaS_resize(L, ctx->newsize);
        }
    } ctx = {newsize};

    // the resize call can fail on exception, in which case we will continue with original size
    int status = luaD_rawrunprotected(L, &CallContext::run, &ctx);
    LUAU_ASSERT(status == LUA_OK || status == LUA_ERRMEM);
}

static void shrinkbuffers(lua_State* L)
{
    global_State* g = L->global;
    // check size of string hash
    if (g->strt.nuse < cast_to(uint32_t, g->strt.size / 4) && g->strt.size > LUA_MINSTRTABSIZE * 2)
        stringresizeprotected(L, g->strt.size / 2); // table is too big
}

static void shrinkbuffersfull(lua_State* L)
{
    global_State* g = L->global;
    // check size of string hash
    int hashsize = g->strt.size;
    while (g->strt.nuse < cast_to(uint32_t, hashsize / 4) && hashsize > LUA_MINSTRTABSIZE * 2)
        hashsize /= 2;
    if (hashsize != g->strt.size)
        stringresizeprotected(L, hashsize); // table is too big
}

static bool deletegco(void* context, lua_Page* page, GCObject* gco)
{
    lua_State* L = (lua_State*)context;
    freeobj(L, gco, page);
    return true;
}

void luaC_freeall(lua_State* L)
{
    global_State* g = L->global;

    LUAU_ASSERT(L == g->mainthread);

    luaM_visitgco(L, L, deletegco);

    for (int i = 0; i < g->strt.size; i++) // free all string lists
        LUAU_ASSERT(g->strt.hash[i] == NULL);

    LUAU_ASSERT(L->global->strt.nuse == 0);
}

static void markmt(global_State* g)
{
    int i;
    for (i = 0; i < LUA_T_COUNT; i++)
        if (g->mt[i])
            markobject(g, g->mt[i]);
}

// mark root set
static void markroot(lua_State* L)
{
    global_State* g = L->global;
    g->gray = NULL;
    g->grayagain = NULL;
    g->weak = NULL;
    markobject(g, g->mainthread);
    // make global table be traversed before main stack
    markobject(g, g->mainthread->gt);
    markvalue(g, registry(L));
    markmt(g);
    g->gcstate = GCSpropagate;
}

static size_t remarkupvals(global_State* g)
{
    size_t work = 0;

    for (UpVal* uv = g->uvhead.u.open.next; uv != &g->uvhead; uv = uv->u.open.next)
    {
        work += sizeof(UpVal);

        LUAU_ASSERT(upisopen(uv));
        LUAU_ASSERT(uv->u.open.next->u.open.prev == uv && uv->u.open.prev->u.open.next == uv);
        LUAU_ASSERT(!isblack(obj2gco(uv))); // open upvalues are never black

        if (isgray(obj2gco(uv)))
            markvalue(g, uv->v);
    }

    return work;
}

static size_t clearupvals(lua_State* L)
{
    global_State* g = L->global;

    size_t work = 0;

    for (UpVal* uv = g->uvhead.u.open.next; uv != &g->uvhead;)
    {
        work += sizeof(UpVal);

        LUAU_ASSERT(upisopen(uv));
        LUAU_ASSERT(uv->u.open.next->u.open.prev == uv && uv->u.open.prev->u.open.next == uv);
        LUAU_ASSERT(!isblack(obj2gco(uv))); // open upvalues are never black
        LUAU_ASSERT(iswhite(obj2gco(uv)) || !iscollectable(uv->v) || !iswhite(gcvalue(uv->v)));

        if (uv->markedopen)
        {
            // upvalue is still open (belongs to alive thread)
            LUAU_ASSERT(isgray(obj2gco(uv)));
            uv->markedopen = 0; // for next cycle
            uv = uv->u.open.next;
        }
        else
        {
            // upvalue is either dead, or alive but the thread is dead; unlink and close
            UpVal* next = uv->u.open.next;
            luaF_closeupval(L, uv, /* dead= */ iswhite(obj2gco(uv)));
            uv = next;
        }
    }

    return work;
}

static size_t atomic(lua_State* L)
{
    global_State* g = L->global;
    LUAU_ASSERT(g->gcstate == GCSatomic);

    size_t work = 0;

#ifdef LUAI_GCMETRICS
    double currts = lua_clock();
#endif

    // remark occasional upvalues of (maybe) dead threads
    work += remarkupvals(g);
    // traverse objects caught by write barrier and by 'remarkupvals'
    work += propagateall(g);

#ifdef LUAI_GCMETRICS
    g->gcmetrics.currcycle.atomictimeupval += recordGcDeltaTime(currts);
#endif

    // remark weak tables
    g->gray = g->weak;
    g->weak = NULL;
    LUAU_ASSERT(!iswhite(obj2gco(g->mainthread)));
    markobject(g, L); // mark running thread
    markmt(g);        // mark basic metatables (again)
    work += propagateall(g);

#ifdef LUAI_GCMETRICS
    g->gcmetrics.currcycle.atomictimeweak += recordGcDeltaTime(currts);
#endif

    // remark gray again
    g->gray = g->grayagain;
    g->grayagain = NULL;
    work += propagateall(g);

#ifdef LUAI_GCMETRICS
    g->gcmetrics.currcycle.atomictimegray += recordGcDeltaTime(currts);
#endif

    // remove collected objects from weak tables
    work += cleartable(L, g->weak);
    g->weak = NULL;

#ifdef LUAI_GCMETRICS
    g->gcmetrics.currcycle.atomictimeclear += recordGcDeltaTime(currts);
#endif

    // close orphaned live upvalues of dead threads and clear dead upvalues
    work += clearupvals(L);

#ifdef LUAI_GCMETRICS
    g->gcmetrics.currcycle.atomictimeupval += recordGcDeltaTime(currts);
#endif

    // flip current white
    g->currentwhite = cast_byte(otherwhite(g));
    g->sweepgcopage = g->allgcopages;
    g->gcstate = GCSsweep;

    return work;
}

// a version of generic luaM_visitpage specialized for the main sweep stage
static int sweepgcopage(lua_State* L, lua_Page* page)
{
    char* start;
    char* end;
    int busyBlocks;
    int blockSize;
    luaM_getpagewalkinfo(page, &start, &end, &busyBlocks, &blockSize);

    LUAU_ASSERT(busyBlocks > 0);

    global_State* g = L->global;

    int deadmask = otherwhite(g);
    LUAU_ASSERT(testbit(deadmask, FIXEDBIT)); // make sure we never sweep fixed objects

    int newwhite = luaC_white(g);

    for (char* pos = start; pos != end; pos += blockSize)
    {
        GCObject* gco = (GCObject*)pos;

        // skip memory blocks that are already freed
        if (gco->gch.tt == LUA_TNIL)
            continue;

        // is the object alive?
        if ((gco->gch.marked ^ WHITEBITS) & deadmask)
        {
            LUAU_ASSERT(!isdead(g, gco));
            // make it white (for next cycle)
            gco->gch.marked = cast_byte((gco->gch.marked & maskmarks) | newwhite);
        }
        else
        {
            LUAU_ASSERT(isdead(g, gco));
            freeobj(L, gco, page);

            // if the last block was removed, page would be removed as well
            if (--busyBlocks == 0)
                return int(pos - start) / blockSize + 1;
        }
    }

    return int(end - start) / blockSize;
}

static size_t gcstep(lua_State* L, size_t limit)
{
    size_t cost = 0;
    global_State* g = L->global;
    switch (g->gcstate)
    {
    case GCSpause:
    {
        markroot(L); // start a new collection
        LUAU_ASSERT(g->gcstate == GCSpropagate);
        break;
    }
    case GCSpropagate:
    {
        while (g->gray && cost < limit)
        {
            cost += propagatemark(g);
        }

        if (!g->gray)
        {
#ifdef LUAI_GCMETRICS
            g->gcmetrics.currcycle.propagatework = g->gcmetrics.currcycle.explicitwork + g->gcmetrics.currcycle.assistwork;
#endif

            // perform one iteration over 'gray again' list
            g->gray = g->grayagain;
            g->grayagain = NULL;

            g->gcstate = GCSpropagateagain;
        }
        break;
    }
    case GCSpropagateagain:
    {
        while (g->gray && cost < limit)
        {
            cost += propagatemark(g);
        }

        if (!g->gray) // no more `gray' objects
        {
#ifdef LUAI_GCMETRICS
            g->gcmetrics.currcycle.propagateagainwork =
                g->gcmetrics.currcycle.explicitwork + g->gcmetrics.currcycle.assistwork - g->gcmetrics.currcycle.propagatework;
#endif

            g->gcstate = GCSatomic;
        }
        break;
    }
    case GCSatomic:
    {
#ifdef LUAI_GCMETRICS
        g->gcmetrics.currcycle.atomicstarttimestamp = lua_clock();
        g->gcmetrics.currcycle.atomicstarttotalsizebytes = g->totalbytes;
#endif

        g->gcstats.atomicstarttimestamp = lua_clock();
        g->gcstats.atomicstarttotalsizebytes = g->totalbytes;

        cost = atomic(L); // finish mark phase

        LUAU_ASSERT(g->gcstate == GCSsweep);
        break;
    }
    case GCSsweep:
    {
        while (g->sweepgcopage && cost < limit)
        {
            lua_Page* next = luaM_getnextpage(g->sweepgcopage); // page sweep might destroy the page

            int steps = sweepgcopage(L, g->sweepgcopage);

            g->sweepgcopage = next;
            cost += steps * GC_SWEEPPAGESTEPCOST;
        }

        // nothing more to sweep?
        if (g->sweepgcopage == NULL)
        {
            // don't forget to visit main thread, it's the only object not allocated in GCO pages
            LUAU_ASSERT(!isdead(g, obj2gco(g->mainthread)));
            makewhite(g, obj2gco(g->mainthread)); // make it white (for next cycle)

            shrinkbuffers(L);

            g->gcstate = GCSpause; // end collection
        }
        break;
    }
    default:
        LUAU_ASSERT(!"Unexpected GC state");
    }
    return cost;
}
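
// The state machine above advances through one full cycle as:
//   GCSpause -> GCSpropagate -> GCSpropagateagain -> GCSatomic -> GCSsweep -> GCSpause
// The propagate and sweep stages honor the `limit` budget and may span many steps; the
// atomic stage always runs to completion within a single step.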

static int64_t getheaptriggererroroffset(global_State* g)
{
    // adjust for error using Proportional-Integral controller
    // https://en.wikipedia.org/wiki/PID_controller
    int32_t errorKb = int32_t((g->gcstats.atomicstarttotalsizebytes - g->gcstats.heapgoalsizebytes) / 1024);

    // we use sliding window for the error integral to avoid error sum 'windup' when the desired target cannot be reached
    const size_t triggertermcount = sizeof(g->gcstats.triggerterms) / sizeof(g->gcstats.triggerterms[0]);

    int32_t* slot = &g->gcstats.triggerterms[g->gcstats.triggertermpos % triggertermcount];
    int32_t prev = *slot;
    *slot = errorKb;
    g->gcstats.triggerintegral += errorKb - prev;
    g->gcstats.triggertermpos++;

    // controller tuning
    // https://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method
    const double Ku = 0.9; // ultimate gain (measured)
    const double Tu = 2.5; // oscillation period (measured)

    const double Kp = 0.45 * Ku; // proportional gain
    const double Ti = 0.8 * Tu;
    const double Ki = 0.54 * Ku / Ti; // integral gain

    double proportionalTerm = Kp * errorKb;
    double integralTerm = Ki * g->gcstats.triggerintegral;

    double totalTerm = proportionalTerm + integralTerm;

    return int64_t(totalTerm * 1024);
}
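
// Worked example with illustrative numbers (not part of the controller state): the tuning
// above yields Kp = 0.45 * 0.9 = 0.405 and Ki = 0.54 * 0.9 / (0.8 * 2.5) = 0.243. If the
// atomic phase overshot the goal by 100 KB (errorKb = 100) and the sliding-window integral
// is 50, the correction is (0.405 * 100 + 0.243 * 50) * 1024 bytes, about 52.7 KB, pulling
// the next trigger earlier to compensate.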

static size_t getheaptrigger(global_State* g, size_t heapgoal)
{
    // adjust threshold based on a guess of how many bytes will be allocated between the cycle start and sweep phase
    // our goal is to begin the sweep when used memory has reached the heap goal
    const double durationthreshold = 1e-3;
    double allocationduration = g->gcstats.atomicstarttimestamp - g->gcstats.endtimestamp;

    // avoid measuring intervals smaller than 1ms
    if (allocationduration < durationthreshold)
        return heapgoal;

    double allocationrate = (g->gcstats.atomicstarttotalsizebytes - g->gcstats.endtotalsizebytes) / allocationduration;
    double markduration = g->gcstats.atomicstarttimestamp - g->gcstats.starttimestamp;

    int64_t expectedgrowth = int64_t(markduration * allocationrate);
    int64_t offset = getheaptriggererroroffset(g);
    int64_t heaptrigger = heapgoal - (expectedgrowth + offset);

    // clamp the trigger between memory use at the end of the cycle and the heap goal
    return heaptrigger < int64_t(g->totalbytes) ? g->totalbytes : (heaptrigger > int64_t(heapgoal) ? heapgoal : size_t(heaptrigger));
}
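
// Worked example with illustrative numbers: if the previous cycle measured an allocation
// rate of 20 MB/s and marking took 0.25 s, expectedgrowth is 5 MB. With a 100 MB heapgoal
// and a +1 MB controller offset, heaptrigger = 100 - (5 + 1) = 94 MB, which is then
// clamped to [totalbytes, heapgoal] so the trigger never fires later than the goal itself.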

size_t luaC_step(lua_State* L, bool assist)
{
    global_State* g = L->global;

    int lim = g->gcstepsize * g->gcstepmul / 100; // how much to work
    LUAU_ASSERT(g->totalbytes >= g->GCthreshold);
    size_t debt = g->totalbytes - g->GCthreshold;

    GC_INTERRUPT(0);

    // at the start of the new cycle
    if (g->gcstate == GCSpause)
        g->gcstats.starttimestamp = lua_clock();

#ifdef LUAI_GCMETRICS
    if (g->gcstate == GCSpause)
        startGcCycleMetrics(g);

    double lasttimestamp = lua_clock();
#endif

    int lastgcstate = g->gcstate;

    size_t work = gcstep(L, lim);

#ifdef LUAI_GCMETRICS
    recordGcStateStep(g, lastgcstate, lua_clock() - lasttimestamp, assist, work);
#endif

    size_t actualstepsize = work * 100 / g->gcstepmul;

    // at the end of the last cycle
    if (g->gcstate == GCSpause)
    {
        // at the end of a collection cycle, set goal based on gcgoal setting
        size_t heapgoal = (g->totalbytes / 100) * g->gcgoal;
        size_t heaptrigger = getheaptrigger(g, heapgoal);

        g->GCthreshold = heaptrigger;

        g->gcstats.heapgoalsizebytes = heapgoal;
        g->gcstats.endtimestamp = lua_clock();
        g->gcstats.endtotalsizebytes = g->totalbytes;

#ifdef LUAI_GCMETRICS
        finishGcCycleMetrics(g);
#endif
    }
    else
    {
        g->GCthreshold = g->totalbytes + actualstepsize;

        // compensate if GC is "behind schedule" (has some debt to pay)
        if (g->GCthreshold >= debt)
            g->GCthreshold -= debt;
    }

    GC_INTERRUPT(lastgcstate);

    return actualstepsize;
}
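
// Sizing arithmetic, using gcstepmul = 200 (i.e. 200%) as an illustrative value: each step
// is budgeted lim = gcstepsize * 2 units of GC work, and a step that performed `work` units
// reports actualstepsize = work * 100 / 200 = work / 2 bytes, so the next GCthreshold is
// advanced by half the work done, minus any accumulated debt.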

void luaC_fullgc(lua_State* L)
{
    global_State* g = L->global;

#ifdef LUAI_GCMETRICS
    if (g->gcstate == GCSpause)
        startGcCycleMetrics(g);
#endif

    if (keepinvariant(g))
    {
        // reset sweep marks to sweep all elements (returning them to white)
        g->sweepgcopage = g->allgcopages;
        // reset other collector lists
        g->gray = NULL;
        g->grayagain = NULL;
        g->weak = NULL;
        g->gcstate = GCSsweep;
    }
    LUAU_ASSERT(g->gcstate == GCSpause || g->gcstate == GCSsweep);
    // finish any pending sweep phase
    while (g->gcstate != GCSpause)
    {
        LUAU_ASSERT(g->gcstate == GCSsweep);
        gcstep(L, SIZE_MAX);
    }

    // clear markedopen bits for all open upvalues; these might be stuck from half-finished mark prior to full gc
    for (UpVal* uv = g->uvhead.u.open.next; uv != &g->uvhead; uv = uv->u.open.next)
    {
        LUAU_ASSERT(upisopen(uv));
        uv->markedopen = 0;
    }

#ifdef LUAI_GCMETRICS
    finishGcCycleMetrics(g);
    startGcCycleMetrics(g);
#endif

    // run a full collection cycle
    markroot(L);
    while (g->gcstate != GCSpause)
    {
        gcstep(L, SIZE_MAX);
    }
    // reclaim as much buffer memory as possible (shrinkbuffers() called during sweep is incremental)
    shrinkbuffersfull(L);

    size_t heapgoalsizebytes = (g->totalbytes / 100) * g->gcgoal;

    // trigger cannot be correctly adjusted after a forced full GC.
    // we will try to place it so that we can reach the goal based on
    // the rate at which we run the GC relative to allocation rate
    // and on amount of bytes we need to traverse in propagation stage.
    // goal and stepmul are defined in percents
    g->GCthreshold = g->totalbytes * (g->gcgoal * g->gcstepmul / 100 - 100) / g->gcstepmul;

    // but it might be impossible to satisfy that directly
    if (g->GCthreshold < g->totalbytes)
        g->GCthreshold = g->totalbytes;

    g->gcstats.heapgoalsizebytes = heapgoalsizebytes;

#ifdef LUAI_GCMETRICS
    finishGcCycleMetrics(g);
#endif
}
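
// Worked example of the trigger formula with illustrative settings gcgoal = 200 and
// gcstepmul = 200 (both percentages): GCthreshold = totalbytes * (200 * 200 / 100 - 100)
// / 200 = totalbytes * 300 / 200, i.e. the next cycle starts once the heap grows to 1.5x
// its post-collection size.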

void luaC_barrierf(lua_State* L, GCObject* o, GCObject* v)
{
    global_State* g = L->global;
    LUAU_ASSERT(isblack(o) && iswhite(v) && !isdead(g, v) && !isdead(g, o));
    LUAU_ASSERT(g->gcstate != GCSpause);
    // must keep invariant?
    if (keepinvariant(g))
        reallymarkobject(g, v); // restore invariant
    else // don't mind
        makewhite(g, o); // mark as white just to avoid other barriers
}

void luaC_barriertable(lua_State* L, LuaTable* t, GCObject* v)
{
    global_State* g = L->global;
    GCObject* o = obj2gco(t);

    // in the second propagation stage, table assignment barrier works as a forward barrier
    if (g->gcstate == GCSpropagateagain)
    {
        LUAU_ASSERT(isblack(o) && iswhite(v) && !isdead(g, v) && !isdead(g, o));
        reallymarkobject(g, v);
        return;
    }

    LUAU_ASSERT(isblack(o) && !isdead(g, o));
    LUAU_ASSERT(g->gcstate != GCSpause);
    black2gray(o); // make table gray (again)
    t->gclist = g->grayagain;
    g->grayagain = o;
}

void luaC_barrierback(lua_State* L, GCObject* o, GCObject** gclist)
{
    global_State* g = L->global;
    LUAU_ASSERT(isblack(o) && !isdead(g, o));
    LUAU_ASSERT(g->gcstate != GCSpause);

    black2gray(o); // make object gray (again)
    *gclist = g->grayagain;
    g->grayagain = o;
}

void luaC_upvalclosed(lua_State* L, UpVal* uv)
{
    global_State* g = L->global;
    GCObject* o = obj2gco(uv);

    LUAU_ASSERT(!upisopen(uv)); // upvalue was closed but needs GC state fixup

    if (isgray(o))
    {
        if (keepinvariant(g))
        {
            gray2black(o); // closed upvalues need barrier
            luaC_barrier(L, uv, uv->v);
        }
        else
        { // sweep phase: sweep it (turning it into white)
            makewhite(g, o);
            LUAU_ASSERT(g->gcstate != GCSpause);
        }
    }
}

// measure the allocation rate in bytes/sec
// returns -1 if allocation rate cannot be measured
int64_t luaC_allocationrate(lua_State* L)
{
    global_State* g = L->global;
    const double durationthreshold = 1e-3; // avoid measuring intervals smaller than 1ms

    if (g->gcstate <= GCSatomic)
    {
        double duration = lua_clock() - g->gcstats.endtimestamp;

        if (duration < durationthreshold)
            return -1;

        return int64_t((g->totalbytes - g->gcstats.endtotalsizebytes) / duration);
    }

    // totalbytes is unstable during the sweep, use the rate measured at the end of mark phase
    double duration = g->gcstats.atomicstarttimestamp - g->gcstats.endtimestamp;

    if (duration < durationthreshold)
        return -1;

    return int64_t((g->gcstats.atomicstarttotalsizebytes - g->gcstats.endtotalsizebytes) / duration);
}

const char* luaC_statename(int state)
{
    switch (state)
    {
    case GCSpause:
        return "pause";

    case GCSpropagate:
        return "mark";

    case GCSpropagateagain:
        return "remark";

    case GCSatomic:
        return "atomic";

    case GCSsweep:
        return "sweep";

    default:
        return NULL;
    }
}