1
//! This module contains the guts of Bevy's entity allocator.
2
//!
3
//! Entity allocation needs to work concurrently and remotely.
4
//! Remote allocations (where no reference to the world is held) are needed for long-running tasks, such as loading assets on separate threads.
5
//! Non-remote, "normal" allocation needs to be as fast as possible while still supporting remote allocation.
6
//!
7
//! The allocator fundamentally is made of a cursor for the next fresh, never used [`EntityIndex`] and a free list.
8
//! The free list is a collection that holds [`Entity`] values that were used and can be reused; they are "free"/available.
9
//! If the free list is empty, it's really simple to just increment the fresh index cursor.
10
//! The tricky part is implementing a remotely accessible free list.
11
//!
12
//! A naive free list could just be a concurrent queue.
13
//! That would probably be fine for remote allocation, but for non-remote allocation, we can go much faster.
14
//! In particular, a concurrent queue must do additional work to handle cases where something is added concurrently with being removed.
15
//! But for non-remote allocation, we can guarantee that no free will happen during an allocation since `free` needs mutable access to the world already.
16
//! That means we can skip a lot of those safety checks.
17
//! Plus, we know the maximum size of the free list ahead of time, since we can assume there are no duplicates.
18
//! That means we can have a much more efficient allocation scheme, far better than a linked list.
19
//!
20
//! The free list needs to be pinned in memory and yet growable.
21
//! That's quite the pickle, but by splitting the growth over multiple arrays, this isn't so bad.
22
//! When the list needs to grow, we just *add* on another array to the buffer (instead of *replacing* the old one with a bigger one).
23
//! These arrays are called [`Chunk`]s.
24
//! This keeps everything pinned, and since we know the maximum size ahead of time, we can make this mapping very fast.
25
//!
26
//! Similar to how `Vec` is implemented, the free list is implemented as a [`FreeBuffer`] (handling allocations and implicit capacity)
27
//! and the [`FreeCount`] manages the length of the free list.
28
//! The free list's item is a [`Slot`], which manages accessing each item concurrently.
29
//!
30
//! These types are summed up in [`SharedAllocator`], which is highly unsafe.
31
//! The [`Allocator`] and [`RemoteAllocator`] types provide safe interfaces to them.
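//!
//! A minimal sketch of how the safe interfaces fit together (illustrative only; these
//! types are internal to `bevy_ecs`, so this is not compiled as a doctest):
//!
//! ```ignore
//! let mut allocator = Allocator::new();
//! // Normal allocation: reuses a freed index if one exists, otherwise takes a fresh one.
//! let entity = allocator.alloc();
//! allocator.free(entity);
//! // Remote allocation: no reference to the world (or to the `Allocator`) is needed.
//! let remote = RemoteAllocator::new(&allocator);
//! let reused_or_fresh = remote.alloc();
//! ```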
32
33
use arrayvec::ArrayVec;
34
use bevy_platform::{
35
cell::SyncUnsafeCell,
36
prelude::{Box, Vec},
37
sync::{
38
atomic::{AtomicBool, AtomicPtr, AtomicU32, AtomicU64, Ordering},
39
Arc,
40
},
41
};
42
use core::mem::ManuallyDrop;
43
use log::warn;
44
use nonmax::NonMaxU32;
45
46
use crate::query::DebugCheckedUnwrap;
47
48
use super::{Entity, EntityIndex, EntitySetIterator};
49
50
/// This is the item we store in the free list.
51
/// Effectively, this is a `MaybeUninit<Entity>` where uninit is represented by `Entity::PLACEHOLDER`.
52
struct Slot {
53
inner: SyncUnsafeCell<Entity>,
54
}
55
56
impl Slot {
57
/// Produces a meaningless empty value. This is a valid but incorrect `Entity`.
58
/// It's valid because the bits do represent a valid bit pattern of an `Entity`.
59
/// It's incorrect because this is in the free buffer even though the entity was never freed.
60
/// Importantly, [`FreeCount`] determines which part of the free buffer is the free list.
61
/// An empty slot may be in the free buffer, but should not be in the free list.
62
/// This can be thought of as the uninitialized `MaybeUninit` memory in a `Vec`'s excess capacity.
63
const fn empty() -> Self {
64
let source = Entity::PLACEHOLDER;
65
Self {
66
inner: SyncUnsafeCell::new(source),
67
}
68
}
69
70
/// Sets the entity at this slot.
71
///
72
/// # Safety
73
///
74
/// There must be a clear, strict order between this call and the previous uses of this [`Slot`].
75
/// Otherwise, the compiler will make unsound optimizations.
76
#[inline]
77
const unsafe fn set_entity(&self, entity: Entity) {
78
// SAFETY: Ensured by caller.
79
unsafe {
80
self.inner.get().write(entity);
81
}
82
}
83
84
/// Gets the stored entity. The result will be [`Entity::PLACEHOLDER`] unless [`set_entity`](Self::set_entity) has been called.
85
///
86
/// # Safety
87
///
88
/// There must be a clear, strict order between this call and the previous uses of this [`Slot`].
89
/// Otherwise, the compiler will make unsound optimizations.
90
#[inline]
91
const unsafe fn get_entity(&self) -> Entity {
92
// SAFETY: Ensured by caller.
93
unsafe { self.inner.get().read() }
94
}
95
}
96
97
/// Each chunk stores a buffer of [`Slot`]s at a fixed capacity.
98
struct Chunk {
99
/// Points to the first slot. If this is null, we need to allocate it.
100
first: AtomicPtr<Slot>,
101
}
102
103
impl Chunk {
104
/// Constructs a null [`Chunk`].
105
const fn new() -> Self {
106
Self {
107
first: AtomicPtr::new(core::ptr::null_mut()),
108
}
109
}
110
111
/// Gets the entity at the index within this chunk.
112
///
113
/// # Safety
114
///
115
/// [`Self::set`] must have been called on this index before, ensuring it is in bounds and the chunk is initialized.
116
/// There must be a clear, strict order between this call and the previous uses of this `index`.
117
/// Otherwise, the compiler will make unsound optimizations.
118
#[inline]
119
unsafe fn get(&self, index: u32) -> Entity {
120
// Relaxed is fine since caller has already assured memory ordering is satisfied.
121
let head = self.first.load(Ordering::Relaxed);
122
// SAFETY: caller ensures we are in bounds and init (because `set` must be in bounds)
123
let target = unsafe { &*head.add(index as usize) };
124
// SAFETY: Caller ensures ordering.
125
unsafe { target.get_entity() }
126
}
127
128
/// Gets a slice of indices.
129
///
130
/// # Safety
131
///
132
/// [`Self::set`] must have been called on these indices before, ensuring it is in bounds and the chunk is initialized.
133
#[inline]
134
unsafe fn get_slice(&self, index: u32, ideal_len: u32, chunk_capacity: u32) -> &[Slot] {
135
let after_index_slice_len = chunk_capacity - index;
136
let len = after_index_slice_len.min(ideal_len) as usize;
137
138
// Relaxed is fine since caller ensures we are initialized already.
139
// In order for the caller to guarantee that, they must have an ordering that orders this `get` after the required `set`.
140
let head = self.first.load(Ordering::Relaxed);
141
142
// SAFETY: Caller ensures we are init, so the chunk was allocated via a `Vec` and the index is within the capacity.
143
unsafe { core::slice::from_raw_parts(head.add(index as usize), len) }
144
}
145
146
/// Sets this entity at this index.
147
///
148
/// # Safety
149
///
150
/// Index must be in bounds.
151
/// There must be a clear, strict order between this call and the previous uses of this `index`.
152
/// Otherwise, the compiler will make unsound optimizations.
153
/// This must not be called on the same chunk concurrently.
154
#[inline]
155
unsafe fn set(&self, index: u32, entity: Entity, chunk_capacity: u32) {
156
// Relaxed is fine here since the caller ensures memory ordering.
157
let ptr = self.first.load(Ordering::Relaxed);
158
let head = if ptr.is_null() {
159
// SAFETY: Ensured by caller.
160
unsafe { self.init(chunk_capacity) }
161
} else {
162
ptr
163
};
164
165
// SAFETY: caller ensures it is in bounds and we are not fighting with other `set` calls or `get` calls.
166
// A race condition is therefore impossible.
167
// The address can't wrap or pass isize max since this addition is within an allocation.
168
// For that to happen, you would first run out of memory in practice.
169
let target = unsafe { &*head.add(index as usize) };
170
171
// SAFETY: Ensured by caller.
172
unsafe {
173
target.set_entity(entity);
174
}
175
}
176
177
/// Initializes the chunk to be valid, returning the pointer.
178
///
179
/// # Safety
180
///
181
/// This must not be called concurrently with itself.
182
#[cold]
183
unsafe fn init(&self, chunk_capacity: u32) -> *mut Slot {
184
let mut buff = ManuallyDrop::new(Vec::new());
185
buff.reserve_exact(chunk_capacity as usize);
186
buff.resize_with(chunk_capacity as usize, Slot::empty);
187
let ptr = buff.as_mut_ptr();
188
// Relaxed is fine here since this is not called concurrently.
189
self.first.store(ptr, Ordering::Relaxed);
190
ptr
191
}
192
193
/// Frees memory
194
///
195
/// # Safety
196
///
197
/// `chunk_capacity` must be the same as it was initialized with.
198
unsafe fn dealloc(&mut self, chunk_capacity: u32) {
199
let to_drop = *self.first.get_mut();
200
if !to_drop.is_null() {
201
// SAFETY: This was created in [`Self::init`] from a standard Vec.
202
unsafe {
203
Vec::from_raw_parts(to_drop, chunk_capacity as usize, chunk_capacity as usize);
204
}
205
}
206
}
207
}
208
209
/// This is a buffer that has been split into power-of-two sized chunks, so that each chunk is pinned in memory.
210
/// Conceptually, each chunk is put end-to-end to form the buffer. This ultimately avoids copying elements on resize,
211
/// while allowing it to expand in capacity as needed. A separate system must track the length of the list in the buffer.
212
/// Each chunk is twice as large as the last, except for the first two which have a capacity of 512.
213
struct FreeBuffer([Chunk; Self::NUM_CHUNKS as usize]);
214
215
impl FreeBuffer {
216
const NUM_CHUNKS: u32 = 24;
217
const NUM_SKIPPED: u32 = u32::BITS - Self::NUM_CHUNKS;
218
219
/// Constructs an empty [`FreeBuffer`].
220
const fn new() -> Self {
221
Self([const { Chunk::new() }; Self::NUM_CHUNKS as usize])
222
}
223
224
/// Computes the capacity of the chunk at this index within [`Self::NUM_CHUNKS`].
225
/// The first 2 have a capacity of 512 (2^9) and the last has a capacity of 2^31.
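///
/// For example, with `NUM_CHUNKS = 24` (so `NUM_SKIPPED = 8`):
/// `capacity_of_chunk(0) == 512`, `capacity_of_chunk(1) == 512`, `capacity_of_chunk(2) == 1024`,
/// ..., `capacity_of_chunk(23) == 1 << 31`, for a total of `u32::MAX + 1` slots across all chunks.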
226
#[inline]
227
const fn capacity_of_chunk(chunk_index: u32) -> u32 {
228
// We do this because we're skipping the first `NUM_SKIPPED` powers, so we need to make up for them by doubling the first index.
229
// This is why the first 2 indices both have a capacity of 512.
230
let corrected = if chunk_index == 0 { 1 } else { chunk_index };
231
// We add NUM_SKIPPED because the total capacity should be as if [`Self::NUM_CHUNKS`] were 32.
232
// This skips the first NUM_SKIPPED powers.
233
let corrected = corrected + Self::NUM_SKIPPED;
234
// This bit shift is just 2^corrected.
235
1 << corrected
236
}
237
238
/// For this index in the whole buffer, returns the index of the [`Chunk`], the index within that chunk, and the capacity of that chunk.
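///
/// For example, `index_info(1025)` is `(2, 1, 1024)`: chunk 2 covers buffer indices `1024..2048`,
/// so buffer index 1025 is slot 1 of that 1024-slot chunk.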
239
#[inline]
240
const fn index_info(full_index: u32) -> (u32, u32, u32) {
241
// We do a `saturating_sub` because we skip the first `NUM_SKIPPED` powers to make space for the first chunk's entity count.
242
// The -1 is because this is the number of chunks, but we want the index in the end.
243
// We store chunks in smallest to biggest order, so we need to reverse it.
244
let chunk_index = (Self::NUM_CHUNKS - 1).saturating_sub(full_index.leading_zeros());
245
let chunk_capacity = Self::capacity_of_chunk(chunk_index);
246
// We only need to cut off this particular bit.
247
/// The capacity is only one bit, and if other bits needed to be dropped, the `leading_zeros` result would have been greater.
248
let index_in_chunk = full_index & !chunk_capacity;
249
250
(chunk_index, index_in_chunk, chunk_capacity)
251
}
252
253
/// For this index in the whole buffer, returns the [`Chunk`], the index within that chunk, and the capacity of that chunk.
254
#[inline]
255
fn index_in_chunk(&self, full_index: u32) -> (&Chunk, u32, u32) {
256
let (chunk_index, index_in_chunk, chunk_capacity) = Self::index_info(full_index);
257
// SAFETY: The `index_info` is correct.
258
let chunk = unsafe { self.0.get_unchecked(chunk_index as usize) };
259
(chunk, index_in_chunk, chunk_capacity)
260
}
261
262
/// Gets the entity at an index.
263
///
264
/// # Safety
265
///
266
/// [`set`](Self::set) must have been called on this index to initialize its memory.
267
/// There must be a clear, strict order between this call and the previous uses of this `full_index`.
268
/// Otherwise, the compiler will make unsound optimizations.
269
unsafe fn get(&self, full_index: u32) -> Entity {
270
let (chunk, index, _) = self.index_in_chunk(full_index);
271
// SAFETY: Ensured by caller.
272
unsafe { chunk.get(index) }
273
}
274
275
/// Sets an entity at an index.
276
///
277
/// # Safety
278
///
279
/// There must be a clear, strict order between this call and the previous uses of this `full_index`.
280
/// Otherwise, the compiler will make unsound optimizations.
281
/// This must not be called on the same buffer concurrently.
282
#[inline]
283
unsafe fn set(&self, full_index: u32, entity: Entity) {
284
let (chunk, index, chunk_capacity) = self.index_in_chunk(full_index);
285
// SAFETY: Ensured by caller and that the index is correct.
286
unsafe { chunk.set(index, entity, chunk_capacity) }
287
}
288
289
/// Iterates the entities in these indices.
290
///
291
/// # Safety
292
///
293
/// [`Self::set`] must have been called on these indices before to initialize memory.
294
/// There must be a clear, strict order between this call and the previous uses of these `indices`.
295
/// Note that until the returned value is dropped, these `indices` are still being accessed,
296
/// making safety for other operations afterward need careful justification.
297
/// Otherwise, the compiler will make unsound optimizations.
298
#[inline]
299
unsafe fn iter(&self, indices: core::ops::Range<u32>) -> FreeBufferIterator<'_> {
300
FreeBufferIterator {
301
buffer: self,
302
future_buffer_indices: indices,
303
current_chunk_slice: [].iter(),
304
}
305
}
306
}
307
308
impl Drop for FreeBuffer {
309
fn drop(&mut self) {
310
for index in 0..Self::NUM_CHUNKS {
311
let capacity = Self::capacity_of_chunk(index);
312
// SAFETY: we have `&mut` and the capacity is correct.
313
unsafe { self.0[index as usize].dealloc(capacity) };
314
}
315
}
316
}
317
318
/// An iterator over a [`FreeBuffer`].
319
///
320
/// # Safety
321
///
322
/// [`FreeBuffer::set`] must have been called on these indices beforehand to initialize memory.
323
struct FreeBufferIterator<'a> {
324
buffer: &'a FreeBuffer,
325
/// The part of the buffer we are iterating at the moment.
326
current_chunk_slice: core::slice::Iter<'a, Slot>,
327
/// The indices in the buffer that are not yet in `current_chunk_slice`.
328
future_buffer_indices: core::ops::Range<u32>,
329
}
330
331
impl<'a> Iterator for FreeBufferIterator<'a> {
332
type Item = Entity;
333
334
#[inline]
335
fn next(&mut self) -> Option<Self::Item> {
336
if let Some(found) = self.current_chunk_slice.next() {
337
// SAFETY: We have `&mut self`, so that memory order is certain.
338
// The caller of `FreeBuffer::iter` ensures the memory order of this value's lifetime.
339
return Some(unsafe { found.get_entity() });
340
}
341
342
let still_need = self.future_buffer_indices.len() as u32;
343
if still_need == 0 {
344
return None;
345
}
346
let next_index = self.future_buffer_indices.start;
347
let (chunk, index, chunk_capacity) = self.buffer.index_in_chunk(next_index);
348
349
// SAFETY: Assured by `FreeBuffer::iter`
350
let slice = unsafe { chunk.get_slice(index, still_need, chunk_capacity) };
351
self.future_buffer_indices.start += slice.len() as u32;
352
self.current_chunk_slice = slice.iter();
353
354
// SAFETY: Constructor ensures these indices are valid in the buffer; the buffer is not sparse, and we just got the next slice.
355
// So the only way for the slice to be empty is if the constructor did not uphold safety.
356
let next = unsafe { self.current_chunk_slice.next().debug_checked_unwrap() };
357
// SAFETY: We have `&mut self`, so that memory order is certain.
358
// The caller of `FreeBuffer::iter` ensures the memory order of this value's lifetime.
359
Some(unsafe { next.get_entity() })
360
}
361
362
#[inline]
363
fn size_hint(&self) -> (usize, Option<usize>) {
364
let len = self.future_buffer_indices.len() + self.current_chunk_slice.len();
365
(len, Some(len))
366
}
367
}
368
369
impl<'a> ExactSizeIterator for FreeBufferIterator<'a> {}
370
impl<'a> core::iter::FusedIterator for FreeBufferIterator<'a> {}
371
372
/// This tracks the state of a [`FreeCount`], which has lots of information packed into it.
373
///
374
/// This has three jobs:
375
///
376
/// - First, obviously, this needs to track the length of the free list.
377
/// When the length is 0, we use the [`FreshAllocator`]; otherwise, we pop.
378
/// The length also tells us where on the list to push freed entities to.
379
/// - Second, we need to be able to "freeze" the length for remote allocations.
380
/// This happens when pushing to the list; we need to prevent a push and remote pop from happening at the same time.
381
/// We call this "disabling the length".
382
/// When it is disabled, only the thing that disabled it is allowed to re-enable it.
383
/// This is like a mutex, but it's faster because we pack the mutex into the same bits as the state.
384
/// See [`FreeCount::disable_len_for_state`] and [`FreeCount::set_state_risky`] for how this can be done.
385
/// - Third, we need to track the generation of the free list.
386
/// That is, any two distinct states of the free list, even if they are the same length, must have different [`FreeCount`] values.
387
/// This becomes important when a remote allocator needs to know if the information it is working with has been outdated.
388
/// See [`FreeList::remote_alloc`] for why this is so important.
389
///
390
/// As if that isn't hard enough, we need to do all three of these things in the same [`AtomicU64`] for performance.
391
/// Not only that, but for memory ordering guarantees, we need to be able to change the length and generation in a single atomic operation.
392
/// We do that with a very specific bit layout:
393
///
394
/// - The least significant 33 bits store a signed 33 bit integer for the length.
395
/// This behaves like a u33, but we define `1 << 32` as 0.
396
/// - The 34th bit stores a flag that indicates if the length has been disabled.
397
/// - The remaining 30 bits are the generation.
398
/// The generation helps differentiate versions of the state that happen to encode the same length.
399
///
400
/// Why this layout?
401
/// A few observations:
402
/// First, since the disabling mechanic acts as a mutex, we only need one bit for that, and we can use bit operations to interact with it.
403
/// That leaves the length and the generation (which we need to distinguish between two states of the free list that happen to be the same length).
404
/// Every change to the length must produce a [`FreeCountState`] that does not equal any previous state.
405
/// The second observation is that we only need to change the generation when we move the length in one direction.
406
/// Here, we tie popping/allocation to a generation change.
407
/// When the length increases, the length part of the state changes, so a generation change is a moot point. (e.g. `L0G0` -> `L1G0`)
408
/// When the length decreases, we also need to change the generation to distinguish the states. (e.g. `L1G0` -> `L0G1`)
409
///
410
/// We need the generation to freely wrap.
411
/// In this case, the generation is 30 bits, so after 2 ^ 30 allocations, the generation will wrap.
412
/// That is technically a soundness concern,
413
/// but it would only cause a problem if the same [`FreeList::remote_alloc`] procedure had been sleeping for all 2 ^ 30 allocations and then when it woke up, all 2 ^ 30 allocations had been freed.
414
/// This is impossibly unlikely and is safely ignored in other concurrent queue implementations.
415
/// Still, we need the generation to wrap; it must not overflow into the length bits.
416
/// As a result, the generation bits *must* be the most significant; this allows them to wrap freely.
417
///
418
/// It is convenient to put the disabling bit next since that leaves the length bits already aligned to the least significant bits.
419
/// That saves us a bit shift!
420
///
421
/// But now we need to stop the length information from messing with the generation or disabling bits.
422
/// Preventing overflow is easy since we can assume the list is unique and there are only `u32::MAX` [`Entity`] values.
423
/// We can't prevent underflow with just 32 bits, and performance prevents us from running checks before a subtraction.
424
/// But we do know that it can't underflow more than `u32::MAX` times because that would cause the [`FreshAllocator`] to overflow and panic for allocating too many entities.
425
/// That means we need to represent "length" values in the `±u32::MAX` range, which gives us an `i33` that we then saturate to a `u32`.
426
/// As mentioned above, we represent this `i33` as a `u33` where we define `1 << 32` as 0.
427
/// This representation works slightly better with the `saturating_sub` in [`FreeCountState::length`] than a true `i33` representation would.
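///
/// A sketch of the resulting bit layout (most significant bits on the left):
///
/// ```text
/// bits 63..=34         | bit 33   | bits 32..=0
/// generation (30 bits) | disabled | length as a "u33", where `1 << 32` encodes 0
/// ```
///
/// For example, `1 << 32` encodes length 0 and `(1 << 32) + 5` encodes length 5;
/// a raw value whose low 33 bits fall below `1 << 32` (after an unchecked subtraction
/// on an empty list) simply reads back as length 0 thanks to the saturating conversion.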
428
#[derive(Clone, Copy)]
429
struct FreeCountState(u64);
430
431
impl FreeCountState {
432
/// When this bit is on, the count is disabled.
433
/// This is used to prevent remote allocations from running at the same time as a free operation.
434
const DISABLING_BIT: u64 = 1 << 33;
435
/// This is the mask for the length bits.
436
const LENGTH_MASK: u64 = (1 << 32) | u32::MAX as u64;
437
/// This is the value of the length mask we consider to be 0.
438
const LENGTH_0: u64 = 1 << 32;
439
/// This is the lowest bit in the u30 generation.
440
const GENERATION_LEAST_BIT: u64 = 1 << 34;
441
442
/// Constructs a length of 0.
443
const fn new_zero_len() -> Self {
444
Self(Self::LENGTH_0)
445
}
446
447
/// Gets the encoded length.
448
#[inline]
449
const fn length(self) -> u32 {
450
let unsigned_length = self.0 & Self::LENGTH_MASK;
451
unsigned_length.saturating_sub(Self::LENGTH_0) as u32
452
}
453
454
/// Returns whether or not the count is disabled.
455
#[inline]
456
const fn is_disabled(self) -> bool {
457
(self.0 & Self::DISABLING_BIT) > 0
458
}
459
460
/// Changes only the length of this count to `length`.
461
#[inline]
462
const fn with_length(self, length: u32) -> Self {
463
// Just turns on the "considered zero" bit since this is non-negative.
464
let length = length as u64 | Self::LENGTH_0;
465
Self(self.0 & !Self::LENGTH_MASK | length)
466
}
467
468
/// For popping `num` off the count, subtract the resulting u64.
469
#[inline]
470
const fn encode_pop(num: u32) -> u64 {
471
let subtract_length = num as u64;
472
// Also subtract one from the generation bit.
473
subtract_length | Self::GENERATION_LEAST_BIT
474
}
475
476
/// Returns the count after popping off `num` elements.
477
#[inline]
478
const fn pop(self, num: u32) -> Self {
479
Self(self.0.wrapping_sub(Self::encode_pop(num)))
480
}
481
}
482
483
/// This is an atomic interface to [`FreeCountState`].
484
struct FreeCount(AtomicU64);
485
486
impl FreeCount {
487
/// Constructs a length of 0.
488
const fn new_zero_len() -> Self {
489
Self(AtomicU64::new(FreeCountState::new_zero_len().0))
490
}
491
492
/// Gets the current state of the buffer.
493
#[inline]
494
fn state(&self, order: Ordering) -> FreeCountState {
495
FreeCountState(self.0.load(order))
496
}
497
498
/// Subtracts `num` from the length, returning the previous state.
499
///
500
/// **NOTE:** Caller should be careful that changing the state is allowed and that the state is not disabled.
501
#[inline]
502
fn pop_for_state(&self, num: u32, order: Ordering) -> FreeCountState {
503
let to_sub = FreeCountState::encode_pop(num);
504
let raw = self.0.fetch_sub(to_sub, order);
505
FreeCountState(raw)
506
}
507
508
/// Marks the state as disabled, returning the previous state.
509
/// When the length is disabled, [`try_set_state`](Self::try_set_state) will fail.
510
/// This is used to prevent remote allocation during a free.
511
#[inline]
512
fn disable_len_for_state(&self, order: Ordering) -> FreeCountState {
513
// We don't care about the generation here since this changes the value anyway.
514
FreeCountState(self.0.fetch_or(FreeCountState::DISABLING_BIT, order))
515
}
516
517
/// Sets the state explicitly.
518
/// Caller must be careful that the state has not changed since getting the state and setting it.
519
/// If that happens, the state may not properly reflect the length of the free list or its generation,
520
/// causing entities to be skipped or given out twice.
521
/// This is not a safety concern, but it is a major correctness concern.
522
#[inline]
523
fn set_state_risky(&self, state: FreeCountState, order: Ordering) {
524
self.0.store(state.0, order);
525
}
526
527
/// Attempts to update the state, returning the new [`FreeCountState`] if it fails.
528
#[inline]
529
fn try_set_state(
530
&self,
531
expected_current_state: FreeCountState,
532
target_state: FreeCountState,
533
success: Ordering,
534
failure: Ordering,
535
) -> Result<(), FreeCountState> {
536
match self
537
.0
538
.compare_exchange(expected_current_state.0, target_state.0, success, failure)
539
{
540
Ok(_) => Ok(()),
541
Err(val) => Err(FreeCountState(val)),
542
}
543
}
544
}
545
546
/// This is conceptually like a `Vec<Entity>` that stores entities pending reuse.
547
struct FreeList {
548
/// The actual buffer of [`Slot`]s.
549
/// Conceptually, this is like the `RawVec` for this `Vec`.
550
buffer: FreeBuffer,
551
/// The length of the free list within the buffer.
552
len: FreeCount,
553
}
554
555
impl FreeList {
556
/// Constructs an empty [`FreeList`].
557
fn new() -> Self {
558
Self {
559
buffer: FreeBuffer::new(),
560
len: FreeCount::new_zero_len(),
561
}
562
}
563
564
/// Gets the number of free entities.
565
///
566
/// # Risk
567
///
568
/// For this to be accurate, this must not be called during a [`Self::free`].
569
#[inline]
570
fn num_free(&self) -> u32 {
571
// Relaxed ordering is fine since this doesn't act on the length value in memory.
572
self.len.state(Ordering::Relaxed).length()
573
}
574
575
/// Frees the `entities` allowing them to be reused.
576
///
577
/// # Safety
578
///
579
/// There must be a clear, strict order between this call and calls to [`Self::free`], [`Self::alloc_many`], and [`Self::alloc`].
580
/// Otherwise, the compiler will make unsound optimizations.
581
#[inline]
582
unsafe fn free(&self, entities: &[Entity]) {
583
// Disable remote allocation.
584
// We don't need to acquire the most recent memory from remote threads because we never read it.
585
// We do not need to release to remote threads because we only changed the disabled bit,
586
// which the remote allocator would see even with relaxed ordering.
587
let state = self.len.disable_len_for_state(Ordering::Relaxed);
588
589
// Append onto the buffer
590
let mut len = state.length();
591
// `for_each` is typically faster than `for` here.
592
entities.iter().for_each(|&entity| {
593
// SAFETY: Caller ensures this does not conflict with `free` or `alloc` calls,
594
// and we just disabled remote allocation with a strict memory ordering.
595
// We only call `set` during a free, and the caller ensures that is not called concurrently.
596
unsafe {
597
self.buffer.set(len, entity);
598
}
599
len += 1;
600
});
601
602
// Update length
603
let new_state = state.with_length(len);
604
// This is safe because `alloc` is not being called and `remote_alloc` checks that it is not disabled.
605
// We don't need to change the generation since this will change the length, which changes the value anyway.
606
// If, from a `remote_alloc` perspective, this does not change the length (i.e. this changes it *back* to what it was),
607
// then `alloc` must have been called, which changes the generation.
608
self.len.set_state_risky(new_state, Ordering::Release);
609
}
610
611
/// Allocates an [`Entity`] from the free list if one is available.
612
///
613
/// # Safety
614
///
615
/// There must be a clear, strict order between this call and calls to [`Self::free`].
616
/// Otherwise, the compiler will make unsound optimizations.
617
#[inline]
618
unsafe fn alloc(&self) -> Option<Entity> {
619
// SAFETY: This will get a valid index because caller ensures there is no way for `free` to be done at the same time.
620
// Relaxed is ok here since `free` is the only time memory is changed, and relaxed still gets the most recent state.
621
// The memory ordering to ensure we read the most recent value at the index is ensured by the caller.
622
let len = self.len.pop_for_state(1, Ordering::Relaxed).length();
623
let index = len.checked_sub(1)?;
624
625
// SAFETY: This was less than `len`, so it must have been `set` via `free` before.
626
// There is a strict memory ordering of this use of the index because the length is only decreasing.
627
// That means there is only one use of this index since the last call to `free`.
628
// The only time the length increases is during `free`, which the caller ensures has a "happened before" relationship with this call.
629
Some(unsafe { self.buffer.get(index) })
630
}
631
632
/// Allocates as many [`Entity`]s from the free list as are available, up to `count`.
633
///
634
/// # Safety
635
///
636
/// There must be a clear, strict order between this call and calls to [`Self::free`].
637
/// Otherwise, the compiler will make unsound optimizations.
638
///
639
/// Note that this allocation call doesn't end until the returned value is dropped.
640
/// So, calling [`Self::free`] while the returned value is live is unsound.
641
#[inline]
642
unsafe fn alloc_many(&self, count: u32) -> FreeBufferIterator<'_> {
643
// SAFETY: This will get a valid index because there is no way for `free` to be done at the same time.
644
// Relaxed is ok here since `free` is the only time memory is changed, and relaxed still gets the most recent state.
645
// The memory ordering to ensure we read the most recent value at the index is ensured by the caller.
646
let len = self.len.pop_for_state(count, Ordering::Relaxed).length();
647
let index = len.saturating_sub(count);
648
649
// SAFETY: The iterator's items are all less than the length, so they are in bounds and have been previously set.
650
// There is a strict memory ordering of this use of the indices because the length is only decreasing.
651
// That means there is only one use of these indices since the last call to `free`.
652
// The only time the length increases is during `free`, which the caller ensures has a "happened before" relationship with this call.
653
unsafe { self.buffer.iter(index..len) }
654
}
655
656
/// Allocates an [`Entity`] from the free list if one is available and it is safe to do so.
657
#[inline]
658
fn remote_alloc(&self) -> Option<Entity> {
659
// The goal is the same as `alloc`, so what's the difference?
660
// `alloc` knows `free` is not being called, but this does not.
661
// What if we `len.fetch_sub(1)` but then `free` overwrites the entity before we could read it?
662
// That would mean we would leak an entity and give another entity out twice.
663
// We get around this by only updating `len` after the read is complete.
664
// But that means something else could be trying to allocate the same index!
665
// So we need a `len.compare_exchange` loop to ensure the index is unique.
666
// Because we keep a generation value in the `FreeCount`, if any of these things happen, we simply try again.
667
// We also need to prevent this from conflicting with a `free` call, so we check to ensure the state is not disabled.
668
669
// We keep track of the attempts so we can yield the thread on std after a few fails.
670
#[cfg(feature = "std")]
671
let mut attempts = 1u32;
672
// We need an acquire ordering to acquire the most recent memory of `free` calls.
673
let mut state = self.len.state(Ordering::Acquire);
674
loop {
675
// The state is only disabled when freeing.
676
// If a free is happening, we need to wait for the new entity to be ready on the free buffer.
677
// That means we will also need to re-fetch the state and acquire the new memory.
678
// Then, we can allocate it.
679
if state.is_disabled() {
680
// Spin 64 times before yielding.
681
#[cfg(feature = "std")]
682
{
683
attempts += 1;
684
if attempts.is_multiple_of(64) {
685
// scheduler probably isn't running the thread doing the `free` call, so yield so it can finish.
686
std::thread::yield_now();
687
} else {
688
core::hint::spin_loop();
689
}
690
}
691
692
#[cfg(not(feature = "std"))]
693
core::hint::spin_loop();
694
695
// Retry with the fresh state and acquired memory order.
696
state = self.len.state(Ordering::Acquire);
697
continue;
698
}
699
700
// At this point, we know a `free` was not happening when we started.
701
702
let len = state.length();
703
let index = len.checked_sub(1)?;
704
705
// SAFETY:
706
//
707
// If no `free` call has started, this safety follows the same logic as in non-remote `alloc`.
708
// That is, the len always counts down, so this is the only use of this index since the last `free`,
709
// and another `free` hasn't happened.
710
//
711
// But if a `free` did start at this point, it would be operating on indices greater than `index`.
712
// We haven't updated the `FreeCount` yet, so the `free` call would be adding to it, while we've been subtracting from it.
713
// That means this is still the only time this index is used since the last `free`!
714
// So, even though we can't guarantee when the concurrent `free` is happening in memory order, it doesn't matter since that `free` doesn't use this index.
715
// We can still establish a clear, strict ordering for this slot because 1) any concurrent `free` doesn't use this index and 2) we have an `Acquire` relationship with the `free` before it.
716
//
717
// So yeah, we could be reading from outdated memory (the free buffer), but the part that we are reading hasn't changed, so that's ok.
718
// That satisfies safety but not correctness.
719
// We still need to double check that a free didn't happen, and retry if it did.
720
// Otherwise, this entity might be given out twice.
721
let entity = unsafe { self.buffer.get(index) };
722
723
let ideal_state = state.pop(1);
724
// If we fail, we need to acquire the new state.
725
// If we succeed, we are finished, and we haven't changed any memory, so we can use relaxed ordering.
726
match self
727
.len
728
.try_set_state(state, ideal_state, Ordering::Relaxed, Ordering::Acquire)
729
{
730
Ok(_) => return Some(entity),
731
Err(new_state) => state = new_state,
732
}
733
}
734
}
735
}
736
737
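/// Hands out fresh, never-before-used [`EntityIndex`] values by incrementing an atomic cursor.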
struct FreshAllocator {
738
/// The next value of [`Entity::index`] to give out if needed.
739
next_entity_index: AtomicU32,
740
}
741
742
impl FreshAllocator {
743
/// This exists because it may change depending on platform.
744
/// Ex: We may want this to be smaller on 32 bit platforms at some point.
745
const MAX_ENTITIES: u32 = u32::MAX;
746
747
/// The total number of indices given out.
748
#[inline]
749
fn total_entity_indices(&self) -> u32 {
750
self.next_entity_index.load(Ordering::Relaxed)
751
}
752
753
/// This just panics.
754
/// It is included to help with branch prediction, and put the panic message in one spot.
755
#[cold]
756
#[inline]
757
fn on_overflow() -> ! {
758
panic!("too many entities")
759
}
760
761
/// Allocates a fresh [`EntityIndex`].
762
/// This index has never been given out before.
763
#[inline]
764
fn alloc(&self) -> Entity {
765
let index = self.next_entity_index.fetch_add(1, Ordering::Relaxed);
766
if index == Self::MAX_ENTITIES {
767
Self::on_overflow();
768
}
769
// SAFETY: We just checked that this was not max and we only added 1, so we can't have missed it.
770
Entity::from_index(unsafe { EntityIndex::new(NonMaxU32::new_unchecked(index)) })
771
}
772
773
/// Allocates `count` [`EntityIndex`]s.
774
/// These indices will be fresh.
775
/// They have never been given out before.
776
fn alloc_many(&self, count: u32) -> AllocUniqueEntityIndexIterator {
777
let start_new = self.next_entity_index.fetch_add(count, Ordering::Relaxed);
778
let new = match start_new
779
.checked_add(count)
780
.filter(|new| *new < Self::MAX_ENTITIES)
781
{
782
Some(new_next_entity_index) => start_new..new_next_entity_index,
783
None => Self::on_overflow(),
784
};
785
AllocUniqueEntityIndexIterator(new)
786
}
787
}
788
789
/// An [`Iterator`] returning a sequence of [`EntityIndex`] values from an [`Allocator`] that are never aliased.
790
/// These indices have never been given out before.
791
///
792
/// **NOTE:** Dropping will leak the remaining entity indices!
793
pub(super) struct AllocUniqueEntityIndexIterator(core::ops::Range<u32>);
794
795
impl Iterator for AllocUniqueEntityIndexIterator {
796
type Item = Entity;
797
798
#[inline]
799
fn next(&mut self) -> Option<Self::Item> {
800
self.0
801
.next()
802
// SAFETY: This came from an *exclusive* range. It can never be max.
803
.map(|idx| unsafe { EntityIndex::new(NonMaxU32::new_unchecked(idx)) })
804
.map(Entity::from_index)
805
}
806
807
#[inline]
808
fn size_hint(&self) -> (usize, Option<usize>) {
809
self.0.size_hint()
810
}
811
}
812
813
impl ExactSizeIterator for AllocUniqueEntityIndexIterator {}
814
impl core::iter::FusedIterator for AllocUniqueEntityIndexIterator {}
815
816
/// This stores allocation data shared by all entity allocators.
817
struct SharedAllocator {
818
/// The entities pending reuse
819
free: FreeList,
820
fresh: FreshAllocator,
821
/// Tracks whether or not the primary [`Allocator`] has been closed.
822
is_closed: AtomicBool,
823
}
824
825
impl SharedAllocator {
826
/// Constructs a [`SharedAllocator`]
827
fn new() -> Self {
828
Self {
829
free: FreeList::new(),
830
fresh: FreshAllocator {
831
next_entity_index: AtomicU32::new(0),
832
},
833
is_closed: AtomicBool::new(false),
834
}
835
}
836
837
/// Allocates a new [`Entity`], reusing a freed index if one exists.
838
///
839
/// # Safety
840
///
841
/// This must not conflict with [`FreeList::free`] calls.
842
#[inline]
843
unsafe fn alloc(&self) -> Entity {
844
// SAFETY: assured by caller
845
unsafe { self.free.alloc() }.unwrap_or_else(|| self.fresh.alloc())
846
}
847
848
/// Allocates `count` [`Entity`]s, reusing freed indices if they exist.
849
///
850
/// # Safety
851
///
852
/// This must not conflict with [`FreeList::free`] calls for the duration of the iterator.
853
#[inline]
854
unsafe fn alloc_many(&self, count: u32) -> AllocEntitiesIterator<'_> {
855
// SAFETY: Ensured by caller.
856
let reused = unsafe { self.free.alloc_many(count) };
857
let still_need = count - reused.len() as u32;
858
let new = self.fresh.alloc_many(still_need);
859
AllocEntitiesIterator { new, reused }
860
}
861
862
/// Allocates a new [`Entity`].
863
/// This will only try to reuse a freed index if it is safe to do so.
864
#[inline]
865
fn remote_alloc(&self) -> Entity {
866
self.free
867
.remote_alloc()
868
.unwrap_or_else(|| self.fresh.alloc())
869
}
870
871
/// Marks the allocator as closed, but it will still function normally.
872
fn close(&self) {
873
self.is_closed.store(true, Ordering::Release);
874
}
875
876
/// Returns true if [`Self::close`] has been called.
877
fn is_closed(&self) -> bool {
878
self.is_closed.load(Ordering::Acquire)
879
}
880
}
881
882
/// This keeps track of freed entities and allows the allocation of new ones.
883
///
884
/// Note that this must not implement [`Clone`].
885
/// The allocator assumes that it is the only one with [`FreeList::free`] permissions.
886
/// If this were cloned, that assumption would be broken, leading to undefined behavior.
887
/// This is in contrast to the [`RemoteAllocator`], which may be cloned freely.
888
pub(super) struct Allocator {
889
/// The shared allocator state, which we share with any [`RemoteAllocator`]s.
890
shared: Arc<SharedAllocator>,
891
/// The local free list.
892
/// We use this to amortize the cost of freeing to the shared allocator since that is expensive.
893
local_free: Box<ArrayVec<Entity, 128>>,
894
}
895
896
impl Default for Allocator {
897
fn default() -> Self {
898
Self::new()
899
}
900
}
901
902
impl Allocator {
903
/// Constructs a new [`Allocator`]
904
pub(super) fn new() -> Self {
905
Self {
906
shared: Arc::new(SharedAllocator::new()),
907
local_free: Box::new(ArrayVec::new()),
908
}
909
}
910
911
/// Allocates a new [`Entity`], reusing a freed index if one exists.
912
#[inline]
913
pub(super) fn alloc(&self) -> Entity {
914
// SAFETY: Violating safety would require a `&mut self` to exist at the same time, but Rust does not allow that.
915
unsafe { self.shared.alloc() }
916
}
917
918
/// The total number of indices given out.
919
#[inline]
920
fn total_entity_indices(&self) -> u32 {
921
self.shared.fresh.total_entity_indices()
922
}
923
924
/// The number of free entities.
925
#[inline]
926
fn num_free(&self) -> u32 {
927
// RISK: `free` requires mutable access.
928
self.shared.free.num_free()
929
}
930
931
/// Flushes the [`local_free`](Self::local_free) list to the shared allocator.
932
#[inline]
933
fn flush_freed(&mut self) {
934
// SAFETY: We have `&mut self`.
935
unsafe {
936
self.shared.free.free(self.local_free.as_slice());
937
}
938
self.local_free.clear();
939
}
940
941
/// Frees the entity allowing it to be reused.
942
#[inline]
943
pub(super) fn free(&mut self, entity: Entity) {
944
if self.local_free.is_full() {
945
self.flush_freed();
946
}
947
// SAFETY: The `ArrayVec` is not full or has just been cleared.
948
unsafe {
949
self.local_free.push_unchecked(entity);
950
}
951
}
952
953
/// Allocates `count` entities in an iterator.
954
#[inline]
955
pub(super) fn alloc_many(&self, count: u32) -> AllocEntitiesIterator<'_> {
956
// SAFETY: `free` takes `&mut self`, and this lifetime is captured by the iterator.
957
unsafe { self.shared.alloc_many(count) }
958
}
959
960
/// Frees the entities allowing them to be reused.
961
#[inline]
962
pub(super) fn free_many(&mut self, entities: &[Entity]) {
963
if self.local_free.try_extend_from_slice(entities).is_err() {
964
// SAFETY: We have `&mut self`.
965
unsafe {
966
self.shared.free.free(entities);
967
}
968
}
969
}
970
}
971
972
impl Drop for Allocator {
973
fn drop(&mut self) {
974
self.shared.close();
975
}
976
}
977
978
impl core::fmt::Debug for Allocator {
979
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
980
f.debug_struct(core::any::type_name::<Self>())
981
.field("total_indices", &self.total_entity_indices())
982
.field("total_free", &self.num_free())
983
.finish()
984
}
985
}
986
987
/// An [`Iterator`] returning a sequence of [`Entity`] values from an [`Allocator`].
988
///
989
/// **NOTE:** Dropping will leak the remaining entities!
990
pub(super) struct AllocEntitiesIterator<'a> {
991
new: AllocUniqueEntityIndexIterator,
992
reused: FreeBufferIterator<'a>,
993
}
994
995
impl<'a> Iterator for AllocEntitiesIterator<'a> {
996
type Item = Entity;
997
998
fn next(&mut self) -> Option<Self::Item> {
999
self.reused.next().or_else(|| self.new.next())
1000
}
1001
1002
fn size_hint(&self) -> (usize, Option<usize>) {
1003
let len = self.reused.len() + self.new.len();
1004
(len, Some(len))
1005
}
1006
}
1007
1008
impl<'a> ExactSizeIterator for AllocEntitiesIterator<'a> {}
1009
impl<'a> core::iter::FusedIterator for AllocEntitiesIterator<'a> {}
1010
1011
// SAFETY: Newly reserved entity values are unique.
1012
unsafe impl EntitySetIterator for AllocEntitiesIterator<'_> {}
1013
1014
impl Drop for AllocEntitiesIterator<'_> {
1015
fn drop(&mut self) {
1016
let leaking = self.len();
1017
if leaking > 0 {
1018
warn!(
1019
"{} entities being leaked via unfinished `AllocEntitiesIterator`",
1020
leaking
1021
);
1022
}
1023
}
1024
}
1025
1026
/// This is a stripped down entity allocator that operates on fewer assumptions than [`EntityAllocator`](super::EntityAllocator).
1027
/// As a result, using this will be slower than the main allocator, but it offers additional freedoms.
1028
/// In particular, this type is fully owned, allowing you to allocate entities for a world without locking or holding reference to the world.
1029
/// This is especially useful in async contexts.
1030
#[derive(Clone)]
1031
pub struct RemoteAllocator {
1032
shared: Arc<SharedAllocator>,
1033
}
1034
1035
impl RemoteAllocator {
1036
/// Creates a new [`RemoteAllocator`] with the provided [`Allocator`] source.
1037
/// If the source is ever destroyed, [`Self::alloc`] will yield garbage values.
1038
/// Be sure to use [`Self::is_closed`] to determine if it is safe to use these entities.
1039
pub(super) fn new(source: &Allocator) -> Self {
1040
Self {
1041
shared: source.shared.clone(),
1042
}
1043
}
1044
1045
/// Returns whether or not this [`RemoteAllocator`] is connected to this source [`Allocator`].
1046
pub(super) fn is_connected_to(&self, source: &Allocator) -> bool {
1047
Arc::ptr_eq(&self.shared, &source.shared)
1048
}
1049
1050
/// Allocates an entity remotely.
1051
///
1052
/// This comes with a major downside:
1053
/// Because this does not hold reference to the world, the world may be cleared or destroyed before you get a chance to use the result.
1054
/// If that happens, these entities will be garbage!
1055
/// They will not be unique in the world anymore and you should not spawn them!
1056
/// Before using the returned values in the world, first check that doing so is ok with [`EntityAllocator::has_remote_allocator`](super::EntityAllocator::has_remote_allocator).
1057
#[inline]
1058
pub fn alloc(&self) -> Entity {
1059
self.shared.remote_alloc()
1060
}
1061
1062
/// Returns whether or not this [`RemoteAllocator`] is still connected to its source [`EntityAllocator`](super::EntityAllocator).
1063
///
1064
/// Note that this could close immediately after the function returns false, so be careful.
1065
/// The best way to ensure that does not happen is to only trust the returned value while holding a reference to the world
1066
/// and to ensure it is the right world through [`EntityAllocator::has_remote_allocator`](super::EntityAllocator::has_remote_allocator).
1067
///
1068
/// This is generally best used as a diagnostic.
1069
/// [`EntityAllocator::has_remote_allocator`](super::EntityAllocator::has_remote_allocator) is a better check for correctness.
1070
pub fn is_closed(&self) -> bool {
1071
self.shared.is_closed()
1072
}
1073
}
1074
1075
#[cfg(test)]
1076
mod tests {
1077
use super::*;
1078
use alloc::vec;
1079
1080
/// Ensure the total capacity of [`FreeBuffer`] is `u32::MAX + 1`.
1081
#[test]
1082
fn chunk_capacity_sums() {
1083
let total: u64 = (0..FreeBuffer::NUM_CHUNKS)
1084
.map(FreeBuffer::capacity_of_chunk)
1085
.map(|x| x as u64)
1086
.sum();
1087
// The last 2 won't be used, but that's ok.
1088
// Keeping them powers of 2 makes things faster.
1089
let expected = u32::MAX as u64 + 1;
1090
assert_eq!(total, expected);
1091
}
1092
1093
/// Ensure [`FreeBuffer`] can be properly indexed.
1094
#[test]
1095
fn chunk_indexing() {
1096
let to_test = vec![
1097
(0, (0, 0, 512)), // index 0 cap = 512
1098
(1, (0, 1, 512)),
1099
(256, (0, 256, 512)),
1100
(511, (0, 511, 512)),
1101
(512, (1, 0, 512)), // index 1 cap = 512
1102
(1023, (1, 511, 512)),
1103
(1024, (2, 0, 1024)), // index 2 cap = 1024
1104
(1025, (2, 1, 1024)),
1105
(2047, (2, 1023, 1024)),
1106
(2048, (3, 0, 2048)), // index 3 cap = 2048
1107
(4095, (3, 2047, 2048)),
1108
(4096, (4, 0, 4096)), // index 4 cap = 4096
1109
];
1110
1111
for (input, output) in to_test {
1112
assert_eq!(FreeBuffer::index_info(input), output);
1113
}
1114
}
1115
1116
#[test]
1117
fn buffer_len_encoding() {
1118
let len = FreeCount::new_zero_len();
1119
assert_eq!(len.state(Ordering::Relaxed).length(), 0);
1120
assert_eq!(len.pop_for_state(200, Ordering::Relaxed).length(), 0);
1121
len.set_state_risky(
1122
FreeCountState::new_zero_len().with_length(5),
1123
Ordering::Relaxed,
1124
);
1125
assert_eq!(len.pop_for_state(2, Ordering::Relaxed).length(), 5);
1126
assert_eq!(len.pop_for_state(2, Ordering::Relaxed).length(), 3);
1127
assert_eq!(len.pop_for_state(2, Ordering::Relaxed).length(), 1);
1128
assert_eq!(len.pop_for_state(2, Ordering::Relaxed).length(), 0);
1129
}
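
/// The generation bits must change whenever entities are popped, so that two states
/// encoding the same length can still be told apart (see [`FreeCountState`]).
/// This is a small sanity check of that encoding.
#[test]
fn free_count_generation_changes_on_pop() {
let start = FreeCountState::new_zero_len().with_length(5);
let popped = start.pop(1);
assert_eq!(popped.length(), 4);
// Restoring the same length (as `free` would) keeps the decremented generation,
// so the raw state still differs from the original.
let restored = popped.with_length(5);
assert_eq!(restored.length(), 5);
assert_ne!(start.0, restored.0);
}

/// Disabling the count (as `free` does) must not disturb the encoded length.
#[test]
fn free_count_disable_bit() {
let len = FreeCount::new_zero_len();
assert!(!len.state(Ordering::Relaxed).is_disabled());
len.disable_len_for_state(Ordering::Relaxed);
assert!(len.state(Ordering::Relaxed).is_disabled());
assert_eq!(len.state(Ordering::Relaxed).length(), 0);
}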
1130
1131
#[test]
1132
fn uniqueness() {
1133
let mut entities = Vec::with_capacity(2000);
1134
let mut allocator = Allocator::new();
1135
entities.extend(allocator.alloc_many(1000));
1136
1137
let pre_len = entities.len();
1138
entities.dedup();
1139
assert_eq!(pre_len, entities.len());
1140
1141
for e in entities.drain(..) {
1142
allocator.free(e);
1143
}
1144
1145
entities.extend(allocator.alloc_many(500));
1146
for _ in 0..1000 {
1147
entities.push(allocator.alloc());
1148
}
1149
entities.extend(allocator.alloc_many(500));
1150
1151
let pre_len = entities.len();
1152
entities.dedup();
1153
assert_eq!(pre_len, entities.len());
1154
}
1155
1156
/// Bevy's allocator doesn't make guarantees about what order entities will be allocated in.
1157
/// This test just exists to make sure allocations don't step on each other's toes.
1158
#[test]
1159
fn allocation_order_correctness() {
1160
let mut allocator = Allocator::new();
1161
let e0 = allocator.alloc();
1162
let e1 = allocator.alloc();
1163
let e2 = allocator.alloc();
1164
let e3 = allocator.alloc();
1165
allocator.free(e0);
1166
allocator.free(e1);
1167
allocator.free(e2);
1168
allocator.free(e3);
1169
allocator.flush_freed();
1170
1171
let r0 = allocator.alloc();
1172
let mut many = allocator.alloc_many(2);
1173
let r1 = many.next().unwrap();
1174
let r2 = many.next().unwrap();
1175
assert!(many.next().is_none());
1176
drop(many);
1177
let r3 = allocator.alloc();
1178
1179
assert_eq!(r0, e3);
1180
assert_eq!(r1, e1);
1181
assert_eq!(r2, e2);
1182
assert_eq!(r3, e0);
1183
}
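
/// A quick end-to-end check that a [`RemoteAllocator`] shares its source's free list
/// and notices when the source is dropped. Like `allocation_order_correctness`, the
/// exact reuse asserted here leans on current implementation behavior, not a guarantee.
#[test]
fn remote_allocator_shares_free_list() {
let mut allocator = Allocator::new();
let remote = RemoteAllocator::new(&allocator);
assert!(remote.is_connected_to(&allocator));

let e = allocator.alloc();
allocator.free(e);
allocator.flush_freed();

// The freed entity is visible to the remote allocator through the shared free list.
assert_eq!(remote.alloc(), e);

drop(allocator);
assert!(remote.is_closed());
}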
1184
}
1185
1186