Implement opt-in change indexes for dense components. #23519
Draft
pcwalton wants to merge 1 commit into bevyengine:main from
Conversation
ElliottjPierce (Contributor) left a comment:
I want to come back to this and do a full review later, but here are some quick thoughts:
- This needs a lot more docs to explain what the structure of this even is. I'll do more review when there's more here. Trying to put this together, I think what's going on here is: in addition to tracking changes for each component value, track changes for blocks/"pages" of entities in each table. There are `PagesSize` entities in each block, and they all share the same world tick. For things that are changed often, this makes mutations slower. But for very rarely changed things, this means we can skip large sections of entities if their shared change tick is old. Am I getting that right?
- We are going to need more docs and examples to motivate this for users. I'd love to see some benchmark results.
- How does this perform for entities that rarely have component values changed but are frequently moved between tables? How much does this hurt spawning performance, inserts, and such? Probably well worth the cost, but still...
- This makes `Mut` 8 bytes larger IIUC. This is probably the most concerning thing for me. This is still probably worth it, but this is going to hurt in some places if I had to guess.
- This will probably improve performance for the average user. But it will also probably make it worse for others, depending on how often they are changing things. I think it would be cool (but probably not worth trying yet) if users could customize the page size more. Maybe per component, with the table just taking the largest, IDK. The more rarely a component is changed, the bigger its page size should be. Maybe even have a tool that can watch the app run and suggest ideal page sizes. Could be interesting.
- I'd like to point out that this improves the theoretical "normal" case but it also makes the theoretical worst case worse. If exactly one entity in each page is changed, even via a different component, it will make performance worse. For example, in a game with 10 rarely changed components using this new indexing scheme, while each one of those 10 is rarely mutated, it's probably pretty common for one of them to be mutated on an entity with all 10. On the whole, this technically makes querying less efficient the more components an entity has, which is not ideal. But it's probably not a huge issue in practice. This could be fixed by moving this indexing scheme to the columns, but that may have other drawbacks. Thoughts on this?
- We need better names here than `Default` and `Indexed`. Maybe `Individual` and `PessimisticallyPaged`, and later we could add `None`? IDK, but "indexed" isn't very informative IMO, but this is a small thing.
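The worst case the reviewer raises (exactly one changed entity per page) can be made concrete with a small model. This sketch is illustrative only: the page size matches the PR's measured value of 256, but the row counts and the `dirty_pages` helper are hypothetical, not this PR's code.

```rust
// Hypothetical model: page summaries only help when changes cluster,
// so one changed row per page defeats the skip entirely.
const PAGE_SIZE: usize = 256;

// Count how many pages a Changed filter would still have to scan.
fn dirty_pages(changed_rows: &[usize], total_rows: usize) -> usize {
    let mut dirty = vec![false; total_rows.div_ceil(PAGE_SIZE)];
    for &row in changed_rows {
        dirty[row / PAGE_SIZE] = true;
    }
    dirty.iter().filter(|d| **d).count()
}

fn main() {
    let total = 5_000 * PAGE_SIZE; // 5,000 pages of rows

    // 5,000 clustered changes dirty only 20 of the 5,000 pages...
    let clustered: Vec<usize> = (0..5_000).collect();
    assert_eq!(dirty_pages(&clustered, total), 20);

    // ...but 5,000 changes spread one per page dirty every page,
    // forcing a full scan plus the extra indexing overhead.
    let spread: Vec<usize> = (0..5_000).map(|i| i * PAGE_SIZE).collect();
    assert_eq!(dirty_pages(&spread, total), 5_000);

    println!("ok");
}
```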
This summary (like the rest of the PR) is a work in progress.
Overview
Currently, for queries that use `Added` and/or `Changed` query filters, the Bevy ECS must examine every component of every entity that matches the archetypes in question. Because core systems like rendering, transforms, and visibility calculation rely heavily on `Added`/`Changed` query filters, this adds up to a significant bottleneck when scaling to millions of entities. With the significant effort in 0.19 to scale to mega-worlds (1 million entities or more), the performance of `Changed` has become the largest blocker to achieving high scalability. The goal is to be competitive with Unity DOTS and its megacity demo, which has approximately 4.5 million mesh instances and modifies about 5,000 transforms per frame; without some method of accelerating `Added` and `Changed`, as for example in this PR, I don't believe this is feasible for Bevy to achieve.

To solve this issue, this commit adds change indexes, an opt-in acceleration method for dense components. Change indexes introduce a table of summaries of each page of rows within a table. The number of consecutive rows that constitute a page is known as the page size; through measurement, I found 256 to be a reasonable conservative value. Each summary consists of the most recent change tick for all the indexed components within that archetype. When iterating through a query (either sequentially or in parallel), if an indexed component C cannot match unless `Added<C>` or `Changed<C>` is true, then the query engine uses the summary to skip entire pages' worth of entities.

Adding the `#[component(change = "indexed")]` attribute to a component enables indexing for that component. Because indexing adds overhead to `Mut<T>`, among other operations, indexing is opt-in instead of opt-out. It's possible to determine statically, at compile time, whether a component is indexed, and the plan to ensure that `Mut<T>` doesn't regress relies on this.

Alternate approaches
There are several alternate approaches that I experimented with. My experience with each one was as follows:
Per-column change indexes
My initial attempt stored change indexes on each column rather than on each archetype. This provided more specificity: the query acceleration could take into account only the change ticks for the components in the query filter rather than all indexed components on the archetype. The downside was that it severely impacted the performance of `extract_meshes_for_gpu_building`, whose query touches 14 different components that each had to be checked, and which is responsible for one of the bottlenecks. In fact, being able to consolidate all of these components into a single check is one of the major motivations for change indexes to begin with.
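The consolidated check can be sketched in miniature as follows. This is a hypothetical illustration, not the PR's actual types: the names (`ChangeIndex`, `record_change`, `page_may_match`) and the `u32` tick representation are invented, but the mechanism (one summary tick per page, shared by all indexed components of the archetype) follows the description above, with the page size set to the PR's measured value of 256.

```rust
// A minimal sketch of an archetype-level page summary.
const PAGE_SIZE: usize = 256; // the conservative value found by measurement

struct ChangeIndex {
    // Most recent change tick for ALL indexed components in each page of rows.
    page_ticks: Vec<u32>,
}

impl ChangeIndex {
    fn new(rows: usize) -> Self {
        Self { page_ticks: vec![0; rows.div_ceil(PAGE_SIZE)] }
    }

    // Called whenever any indexed component in `row` is mutated.
    fn record_change(&mut self, row: usize, tick: u32) {
        let page = row / PAGE_SIZE;
        if self.page_ticks[page] < tick {
            self.page_ticks[page] = tick;
        }
    }

    // An Added/Changed filter can skip an entire page when its summary
    // tick is not newer than the system's last run. One lookup covers
    // every indexed component on the archetype.
    fn page_may_match(&self, page: usize, last_run: u32) -> bool {
        self.page_ticks[page] > last_run
    }
}

fn main() {
    let mut index = ChangeIndex::new(1024); // 4 pages of 256 rows
    index.record_change(300, 7); // row 300 lives in page 1
    assert!(index.page_may_match(1, 5));
    assert!(!index.page_may_match(0, 5)); // pages 0, 2, and 3 can be skipped
    println!("ok");
}
```

The trade-off against the per-column variant is visible here: one summary per archetype means one check regardless of how many indexed components the query filters on, at the cost of false positives when an unrelated indexed component in the same page changes.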
Per-archetype change indexes
I also experimented with change indexes stored on the archetype instead of on the table. The advantage of storing the index on the archetype would be that sparse sets and tables are handled identically. Unfortunately, this ballooned complexity quite a bit and led to a lot of incorrect behavior. The biggest sticking point that I could see was that, in order to produce a `Mut<T>` with a pointer to the change index, a pointer to the change index needs to be stored in the `Fetch`. But that's incompatible with how query iteration for dense components works: for dense components, queries iterate over tables, not over components.

Benchmarks
many_cubes

My primary interest is in scaling to worlds with millions of entities. A pure benchmark of scalability in this area is `many_cubes --instance-count 4000000 --no-cpu-culling`. (Four million cubes is the maximum before the transform-and-cull shader runs into `wgpu` workgroup limits, and CPU culling must be disabled in order to meaningfully scale to that level.) The results are as follows:

- `many_cubes --instance-count 4000000 --no-cpu-culling`, `main`: 19.34 median ms/frame, 52 FPS
- `many_cubes --instance-count 4000000 --no-cpu-culling`, this PR: 14.49 median ms/frame, 69 FPS
The `extract_mesh_materials` system, the bottleneck during the extraction phase, goes from a median of 4.58 ms/frame to 0.0238 ms/frame, a 192x speedup.

(Please note that `batch_and_prepare_binned_render_phase`, `write_work_item_buffers`, and `write_indirect_parameters_buffers` are all addressed by #23481 and followups to it, so the overall speedups from change indexes won't be limited by Amdahl's law the way they are now.)

bevy_city

In `bevy_city`, 12,442 entities out of 46,717 change every frame. This is not a workload that change indexes significantly improve, because the time spent actually doing the work that must happen on change dwarfs the time spent checking the filter for static meshes. Nevertheless, it's useful to show that change indexes don't regress `bevy_city`. Note that `bevy_city` is GPU bound, so the total frame times don't really indicate anything related to this PR.

- `bevy_city` with no CPU culling on meshes, `main`: median frame time 26.9 ms (37 FPS)
- `bevy_city` with no CPU culling on meshes, this PR: median frame time 27.8 ms (36 FPS)
`extract_meshes_for_gpu_building` comparison between this PR (yellow) and `main` (red). Median time is 2.03 ms in both cases.

Addition and removal
(Benchmark comparison table against `main`: `add_remove/table`, `add_remove/sparse_set`, `add_remove_big/table`, `add_remove_big/sparse_set`, and `add_remove_very_big/table`. Numeric results were not captured in this export.)

Change detection

(Benchmark comparison table against `main`: `all_added_detection`, `all_changed_detection`, `few_changed_detection`, and `none_changed_detection` at 5,000 and 50,000 entities, plus `multiple_archetypes_none_changed_detection` at 5, 20, and 100 archetypes with 10 to 10,000 entities each, all for both `Table` and `Sparse` storage. Numeric results were not captured in this export.)

Future work
These benchmark numbers shouldn't be considered the upper limit of what is possible with change indexes. The remaining systems in `many_cubes`, for instance, could probably see large improvements with additional work. For instance:

- Systems such as `visibility::calculate_bounds` and `mark_meshes_as_changed_if_their_materials_changed` aren't currently eligible to use change indexes because they use `AssetChanged`, which must perform a full table scan. However, by introducing a resource that stores a bidirectional index between `Mesh` and `Material` assets and the entities that use them, the `AssetChanged` query filter could be dropped, and these systems could be migrated to use only `Added`/`Changed`, making them eligible for change indexes.
- Some systems, such as `reset_view_visibility`, could be migrated to use change indexes and be eliminated from the profile.

Ultimately, the goal is for the CPU time to approach zero for meshes that don't change from frame to frame, and to have efficient handling for meshes that do.
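The bidirectional asset-to-entity index suggested in the future-work list could look something like the following minimal sketch. Everything here is hypothetical and not part of this PR: the `AssetEntityIndex` type, its method names, and the plain-integer stand-ins for Bevy's asset and entity IDs are all invented for illustration.

```rust
use std::collections::{HashMap, HashSet};

// Plain-integer stand-ins for Bevy's asset and entity identifiers.
type AssetId = u64;
type Entity = u64;

// A two-way map from assets to the entities that use them and back, so
// "which entities use this changed asset?" becomes a single lookup
// instead of the full table scan that AssetChanged requires.
#[derive(Default)]
struct AssetEntityIndex {
    entities_by_asset: HashMap<AssetId, HashSet<Entity>>,
    asset_by_entity: HashMap<Entity, AssetId>,
}

impl AssetEntityIndex {
    fn insert(&mut self, entity: Entity, asset: AssetId) {
        // Remove any stale mapping first so the two sides stay in sync.
        if let Some(old) = self.asset_by_entity.insert(entity, asset) {
            if let Some(set) = self.entities_by_asset.get_mut(&old) {
                set.remove(&entity);
            }
        }
        self.entities_by_asset.entry(asset).or_default().insert(entity);
    }

    fn entities_using(&self, asset: AssetId) -> impl Iterator<Item = &Entity> {
        self.entities_by_asset.get(&asset).into_iter().flatten()
    }
}

fn main() {
    let mut index = AssetEntityIndex::default();
    index.insert(1, 100); // entity 1 uses asset 100
    index.insert(2, 100); // entity 2 uses asset 100
    index.insert(1, 200); // entity 1 switches to asset 200
    assert_eq!(index.entities_using(100).count(), 1);
    assert_eq!(index.entities_using(200).count(), 1);
    assert_eq!(index.asset_by_entity[&2], 100);
    println!("ok");
}
```

With such a resource kept up to date on insertion and removal, the systems above could react to asset changes by looking up only the affected entities, which in turn makes them eligible for change-index acceleration.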