anders429 / brood

A fast and flexible entity component system library.

License: Apache License 2.0
This is an idea that has haunted me for a while. Originally, brood was architected without `archetype::Identifier`, with `Archetype` being generic on the entity type. That only works if the entity type is "canonical", which requires some kind of canonicalization.

The original canonicalization I used didn't scale: it ended up with 2^n code paths for each component, which obviously did not scale well compile-time-wise. My small tests with 4 components worked fine, but when I expanded to the 27 components needed for some of the ECS bench suite tests, it quickly fell apart.

So I decided to use the run-time archetype identifier solution. It incurs a runtime cost of allocating for the archetype identifier, which increases linearly with the number of components, but it works well enough.
However, I think I now have a method of actually doing canonicalization at compile time, transforming any arbitrary entity into a canonical entity. It exists here at this playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=06ce568a5291035c8da980d887de8892
Huge shoutout to frunk and Lloyd Chan's blog post about how frunk was written. He was right: the hardest part of this stuff was getting the type signatures right.

The solution above uses the Registry as the canonical form, and uses each component to try to pull that component out of the entity. Using the `Get` trait, which is generated for each entity for each of its components, we can pull the component out. If the `Get` trait isn't generated for the entity for that component, that means the component isn't in the entity, and so we can safely skip it. Through crazy type shenanigans, we can effectively encode this for the compiler to understand. Because the invalid cases of pulling a component that doesn't exist can't be generated at compile time, and therefore are never attempted, the end result is the compiler generating the code quickly, even for large numbers of components.
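The playground link captures the full idea; a minimal standalone sketch of the same trick (all names here are illustrative, not brood's actual API) looks like this. The extra type-level index parameter is what keeps the two impls from overlapping, so the compiler resolves it uniquely without exploring 2^n paths:

```rust
// An entity is a heterogeneous cons-list of components, and `Get<T, I>` is
// implemented only for lists that actually contain `T`. The `I` parameter is
// a type-level index (`Here` or `There<I>`) that prevents impl overlap.
use std::marker::PhantomData;

struct Nil;
struct Cons<H, T>(H, T);

struct Here;
struct There<I>(PhantomData<I>);

trait Get<T, I> {
    fn get(self) -> T;
}

// The component is at the head of the list.
impl<T, Tail> Get<T, Here> for Cons<T, Tail> {
    fn get(self) -> T {
        self.0
    }
}

// The component is somewhere in the tail.
impl<Head, T, Tail, I> Get<T, There<I>> for Cons<Head, Tail>
where
    Tail: Get<T, I>,
{
    fn get(self) -> T {
        self.1.get()
    }
}

fn main() {
    // A (u32, &str) entity; pull the `u32` component out by type alone.
    let entity = Cons(42u32, Cons("position", Nil));
    let component: u32 = entity.get();
    assert_eq!(component, 42);
}
```

If the entity doesn't contain the requested component, no `Get` impl exists, so the invalid pull is a compile error rather than a code path the compiler has to generate.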
To use this in the library will require a good amount of footwork, but in the end it will simplify quite a bit. This should allow us to rip out archetype identifiers, possibly rip out component maps, and make insertions much faster. Additionally, this canonicalization can be applied to views as well, which means queries should be much faster too.
It doesn't make any sense to query for entities not contained in the `World`'s `Registry`. Currently, querying just silently fails if a viewed component isn't contained in the `Registry`. This incurs a runtime cost at query time.
Ideally, it would be good to emit an error, and if possible, move the check to compile time since the query and registry are both known at compile time. Emitting an error is possible in the short run, but the compile-time check is something that will have to come later (see #49 for more details on that effort). This issue will be tracking the effort to emit an error.
Needs to be written.
Similar to how `system::Null` cannot be instantiated, a `Registry` should also not be instantiable. This will prevent confusion between `Registry` and `Entity` heterogeneous lists. `Registry`s should only be used to register components for a `World` to be able to store.
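One way to do this, sketched below with illustrative names (not brood's exact definitions), is to wrap an uninhabited type: the registry terminator then remains fully usable as a type while no value of it can ever exist.

```rust
use std::marker::PhantomData;

// Uninhabited: no values of this type exist.
pub enum Never {}

// `Null` can never be constructed, because construction would
// require a value of `Never`.
pub struct Null {
    _never: Never,
}

// Registries are type-level lists; they are only ever named, never built.
pub struct Cons<C, R>(PhantomData<(C, R)>);

pub trait Registry {}
impl Registry for Null {}
impl<C, R: Registry> Registry for Cons<C, R> {}

fn registry_name<R: Registry>() -> &'static str {
    std::any::type_name::<R>()
}

fn main() {
    // The registry type can be named and used without ever instantiating it.
    let name = registry_name::<Cons<u32, Cons<bool, Null>>>();
    assert!(name.contains("Cons"));
}
```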
Annotating feature-gated items this way will make it much more obvious which items are contained to which features.
Happened on #81. `miri` failed, but succeeded on a rerun. It's worth investigating, as it means there could be some edge-case undefined behavior being hit that `miri` can't catch every time. From the logs, it seems to be originating in the `Scheduler` logic.
The `World`'s entry API currently only supports adding and removing components. Some other methods to consider adding are things like checking to see if a component exists or querying the entity directly.
The trait `ExactSizeIterator` does not guarantee the size is correct (see the docs). Therefore, the usage within `EntityAllocator::allocate_batch()` is technically incorrect, since the length is assumed to be correct by the unsafe code. In practice, this doesn't seem to cause any issues, since `Range` does implement the unstable `TrustedLen` trait and is the only type ever passed to the method, but it still opens up potential unsoundness in the future if anything is ever passed that doesn't actually meet these requirements.

A potential solution would be to just change the type signature of the `allocate_batch` method to take the exact iterator type we pass to it. I wish the `TrustedLen` trait could just be used, but it's unstable, and who knows when it will be stabilized.
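To illustrate the hazard: `ExactSizeIterator` can be implemented by entirely safe code that reports a wrong length, so unsafe code must not rely on `len()` for memory safety. A minimal sketch (the `Liar` type is invented for demonstration):

```rust
// A safe, legal `ExactSizeIterator` impl whose `len()` lies about the
// number of items that will actually be yielded.
struct Liar {
    remaining: usize,
}

impl Iterator for Liar {
    type Item = u64;
    fn next(&mut self) -> Option<u64> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(0)
        }
    }
}

impl ExactSizeIterator for Liar {
    // Claims twice as many items as will ever be produced. No unsafe needed.
    fn len(&self) -> usize {
        self.remaining * 2
    }
}

fn main() {
    let iter = Liar { remaining: 3 };
    let claimed = iter.len();
    let actual = iter.count();
    // Any unsafe code assuming `claimed == actual` would be unsound here.
    assert_eq!(claimed, 6);
    assert_eq!(actual, 3);
}
```

This is exactly why `TrustedLen` is an `unsafe` trait: implementors promise the length is correct, so unsafe code may rely on it, whereas `ExactSizeIterator` makes no such promise.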
There are a few methods that are common across many standard collections that could be implemented:

- `len` / `is_empty`
- `contains`
- `clear`
- `reserve`
- `shrink_to_fit`
While `ecs_bench_suite` is wonderful, there are a few other things I'd like to bench, such as `push()`, `entry()`, deletions, etc. It would therefore be helpful to have some benchmarks within the project itself.
It may be possible to implement `Extend` for `World` in a generic way that allows extending from any iterator that yields values implementing `Entity`. If this is found to be possible, `IntoIterator` could also be implemented for `Batch<Entities>` (which still needs to be renamed, by the way), allowing `extend` to work with that struct.

One main issue is that our current `extend` method returns `EntityIdentifier`s corresponding to the inserted values. This would not be possible with the `Extend` trait. At the very least, however, `Batch<Entities>` should be made to implement `IntoIterator`, and should also work with inputs besides `Vec`.
Support should be added for parallel iteration using the `rayon` crate (which seems to be the standard, at least as far as ECS libraries go).

A `World::par_query()` method should be introduced that returns a struct implementing `rayon::ParallelIterator`. Design of the iterator will likely be similar to the official `ParallelIterator` implementation for `Flatten`. See the relevant source code here.
Came across this today. With the trailing comma, it compiles just fine, but without the comma it seems to not resolve the macro properly, complaining about a `@` character being present.
Similar to how `Vec<T>` implements `Deref<Target = [T]>`, an implementation of `Deref` may be beneficial here. `archetype::Identifier` and `archetype::IdentifierRef` have the same relationship as `Vec<T>` and `[T]`, or `String` and `str`, and the same benefits would result. Rather than having a bunch of duplicate methods, the methods could just be implemented on `archetype::IdentifierRef`.

See also the Collections Are Smart Pointers article in Rust Design Patterns. In this case, `archetype::Identifier` is a collection of set components.
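A sketch of the `String`/`str`-style pattern applied here, with illustrative names and a bitmask representation assumed for the identifier (this mirrors how the standard library implements `Path` over `OsStr`):

```rust
use std::ops::Deref;

// Borrowed form: an unsized, transparent wrapper over the raw bitmask,
// analogous to `str` over `[u8]`.
#[repr(transparent)]
struct IdentifierSlice {
    bits: [u8],
}

impl IdentifierSlice {
    fn from_bits(bits: &[u8]) -> &IdentifierSlice {
        // SAFETY: `IdentifierSlice` is `#[repr(transparent)]` over `[u8]`,
        // so the pointer cast preserves layout and metadata.
        unsafe { &*(bits as *const [u8] as *const IdentifierSlice) }
    }

    // Shared methods are written once here, usable through both forms.
    fn contains(&self, component_index: usize) -> bool {
        self.bits[component_index / 8] & (1 << (component_index % 8)) != 0
    }
}

// Owned form, analogous to `String`.
struct Identifier {
    bits: Vec<u8>,
}

impl Deref for Identifier {
    type Target = IdentifierSlice;
    fn deref(&self) -> &IdentifierSlice {
        IdentifierSlice::from_bits(&self.bits)
    }
}

fn main() {
    let id = Identifier { bits: vec![0b0000_0101] };
    // `contains` comes from `IdentifierSlice`, via deref coercion.
    assert!(id.contains(0));
    assert!(!id.contains(1));
    assert!(id.contains(2));
}
```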
Not sure why, but it apparently hasn't run successfully for the last 5 days. Looks like the error is `Failed to load coverage: Truncated coverage data`. Will need further investigation (although it's not a priority right now, since the tests aren't expected to fully cover the codebase at this point anyway).
The `query` module is kind of a mess right now, with stuff just dumped in there. The following changes should be made:

- Split things out into `view` and `filter` submodules.
- Add a `views!` macro so users don't have to deal with heterogeneous list boilerplate.

Not sure if the code correctly considers the case of multiples of the same `Component` within the registry. There might be some unsoundness that could occur if this isn't handled correctly.
Currently, `Archetype`s are stored in a `HashMap` keyed by an `Identifier`, where the `Archetype` itself owns the `IdentifierBuffer`. This works fine, except for the danger involved with the `Identifier` needing to outlive its `IdentifierBuffer`. Right now everything is sound, but it would be easy to lose that soundness accidentally.

This stackoverflow answer suggests using a `HashSet` instead, and defining the `Hash` and `Eq` implementations to be based on the `IdentifierBuffer` stored directly in the `Archetype`. It would make accessing the `Archetype` a bit easier, since `Borrow` could also be implemented to still allow using an `Identifier` on lookup. This way, `Identifier`s used elsewhere could be associated with their appropriate lifetime, since the issue with the key-value relationship is no longer a problem.
When an entity has a component added or removed, the entity is currently removed from its archetype table and inserted into another one. Unfortunately, this currently causes all of the values within the table after the removed one to be shifted, which leads to problems if those entities are later looked up in the entity allocator: their indexes no longer point to the correct position.
Since `World` is an unordered collection, `insert` makes more sense than `push` for a method name. `push` implies the entity is being pushed on top of the other entities, which is not what is happening.

Similarly, we could consider renaming `extend` to something like `insert_batch`? I'm not sure what I think about this one. `HashSet` still uses `extend`, but that's because it implements the `Extend` trait. I think `extend` still makes sense for an unordered collection, as the meaning is to allow "you to extend a collection by including the contents of that iterator", according to the `Extend` docs.
The codebase is currently split between `public` and `internal` modules. However, this leads to nearly every module of the project being duplicated in both `public` and `internal`, which can make things really confusing, especially when importing items.

Removing this split might make things complicated as far as "sealed" code goes (code marked as `pub` but never publicly exposed). It also would involve quite a bit of adjustment to `use` statements. However, it would come with the benefit of everything being exactly where you expect it to be. Plus, users who arrive at the source code from the docs may be less surprised by what they find. On that same note, it may be less confusing to combine all `impl`s of traits into a single module, although I'm still a bit undecided on this.
Consider adding a `Runnable` trait implemented on `System`s, `ParSystem`s, and `Schedule`s. Then, `World::run()` could operate on a `Runnable` generic type, allowing the method to be reused for more than just `Schedule`s.

This will also allow `System`s to be usable without the `parallel` feature enabled.
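The shape of that abstraction might look something like the following sketch (all signatures hypothetical; brood's real `run` methods take more context than shown here):

```rust
// Stand-in for brood's `World`.
struct World;

trait Runnable {
    fn run(&mut self, world: &mut World);
}

struct System {
    runs: usize, // counts executions, just for demonstration
}

impl Runnable for System {
    fn run(&mut self, _world: &mut World) {
        self.runs += 1; // a real system would execute its query here
    }
}

struct Schedule {
    systems: Vec<System>,
}

impl Runnable for Schedule {
    fn run(&mut self, world: &mut World) {
        // A real schedule would run non-conflicting stages in parallel.
        for system in &mut self.systems {
            system.run(world);
        }
    }
}

impl World {
    // One generic method covers systems and schedules alike.
    fn run<R: Runnable>(&mut self, runnable: &mut R) {
        runnable.run(self);
    }
}

fn main() {
    let mut world = World;
    let mut schedule = Schedule {
        systems: vec![System { runs: 0 }, System { runs: 0 }],
    };
    world.run(&mut schedule);
    assert!(schedule.systems.iter().all(|s| s.runs == 1));
}
```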
This crate is getting closer and closer to being ready for a `0.1.0` release, but there are still a few things that need to be done. Ideally, these tasks can be completed in the next couple of months.

- Ensure every `unsafe` item has a `# Safety` section.
- Ensure every panicking item has a `# Panic` section.
- Add compile-fail tests (using `trybuild`).
- Settle naming conventions (e.g. `registry::NullRegistry` vs `registry::Null`).
- Clean up the `query` module.
- Make sure `cargo check` and `cargo clippy` pass with `clippy::pedantic`.
- Run `clippy` lints on CI.
- Audit `unsafe` blocks, annotating why they are sound with a `SAFETY` comment.
- Build on the `stable` channel of Rust. This currently requires `1.58.0`; however, that release should come in, like, the next week.

It's really common within the internal code to see a slice version of an `archetype::Identifier` passed to a heterogeneous list function, along with relevant bits and indices for finding the specific component's bit. The bits and indices are then incremented, leading to a lot of duplicated code.

A better solution is to create an iterator that holds this context and yields booleans indicating whether each component is identified or not. This will significantly reduce duplicated code within the `internal::registry` submodules.
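Such an iterator could look like this sketch (names and the bitmask layout are illustrative assumptions, not brood's actual internals):

```rust
// Iterates over an identifier's bits, yielding whether each component
// in the registry is present, and owning the byte/bit bookkeeping that
// is currently duplicated at every call site.
struct IdentifierIter<'a> {
    bytes: &'a [u8],
    current: u8,
    position: usize,
}

impl<'a> IdentifierIter<'a> {
    fn new(bytes: &'a [u8]) -> Self {
        Self { bytes, current: 0, position: 0 }
    }
}

impl Iterator for IdentifierIter<'_> {
    type Item = bool;
    fn next(&mut self) -> Option<bool> {
        let index = self.position / 8;
        if index >= self.bytes.len() {
            return None;
        }
        if self.position % 8 == 0 {
            // Load the next byte when crossing a byte boundary.
            self.current = self.bytes[index];
        }
        let bit = self.current & 1 == 1;
        self.current >>= 1;
        self.position += 1;
        Some(bit)
    }
}

fn main() {
    // A registry of 4 components; bits 0 and 2 are set.
    let bits: Vec<bool> = IdentifierIter::new(&[0b0101]).take(4).collect();
    assert_eq!(bits, [true, false, true, false]);
}
```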
This is a tracking issue for complete unit testing of the code base. As modules have unit tests written for them, they will be checked off the following list.
From this point forward, all pull requests should include unit tests for everything added. This wasn't included in the past due to the experimental nature of things, but now that this is shaping into a real library tests need to be included from the get-go.
We should check that `cargo doc` runs successfully in CI, promoting all warnings to errors using the `RUSTDOCFLAGS='-D warnings'` env variable (see this issue for discussion on using this env variable). Ideally, we should also attempt to build documentation in as close to the same way as what docs.rs will be doing. This should hopefully prevent having to republish due to failures occurring when docs.rs generates documentation.
Related to #59 in that it's another collection method.

The idea is to add a method named `drain` that returns an iterator over the queried entities, removing them from the `World` at the same time. The signature of the method would be the same as that of `query`, with views and a filter.

Care would need to be taken to remove the entities from the entity allocator as well. The drain can also return entity identifiers, but they will no longer be valid, which will be indicated by their generations. The documentation should note this.

The returned iterator could have a reference to the actual archetypes in memory to facilitate the iteration, similar to how, say, `HashSet`'s `drain` method works.

Note that the returned entities shouldn't be references, but could instead be the values moved directly, since they will no longer exist in their archetypes after iteration.
Tracking issue for the self safety audit ongoing for the code base.

The goal is to ensure that every `unsafe` code block fulfills its safety contracts and is actually sound. Additionally, every `unsafe fn` defined needs to establish a safety contract.
Progress of the audit is tracked on a module basis below:
Internal code defining functions and methods on heterogeneous lists is a bit repetitive, redefining the functions and methods 3 separate times. We could consider combining those definitions into an internal `macro_rules!` macro.
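A sketch of the deduplication (the trait, method, and terminator names are illustrative): one macro body stamps out the same impl shape for each heterogeneous-list flavor, instead of it being hand-written per flavor.

```rust
// Generates a length trait plus its null/cons impls for one list flavor.
macro_rules! define_length {
    ($trait_name:ident, $method:ident, $null:ident) => {
        trait $trait_name {
            fn $method(&self) -> usize;
        }

        struct $null;

        impl $trait_name for $null {
            fn $method(&self) -> usize {
                0
            }
        }

        // Cons cells are modeled as nested tuples: (head, tail).
        impl<H, T: $trait_name> $trait_name for (H, T) {
            fn $method(&self) -> usize {
                1 + self.1.$method()
            }
        }
    };
}

// One invocation per list flavor, rather than three hand-written copies.
define_length!(EntityLength, entity_length, EntityNull);
define_length!(RegistryLength, registry_length, RegistryNull);

fn main() {
    assert_eq!((1u32, (true, EntityNull)).entity_length(), 2);
    assert_eq!((1u8, RegistryNull).registry_length(), 1);
}
```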
Consider adding some kind of `ComponentMap<R>` wrapper around a `HashMap`, mapping from component type to index within the registry `R`.
This will limit the safety contract to only having to be upheld when the component map is created, rather than having to be upheld every time it is used, which is the current state of things. This can significantly reduce the possibility of error, since it will reduce the scope of the safety contracts that need to be upheld internally.
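A sketch of the wrapper (hypothetical API; in the real library the map would be constructed from the registry itself, which is exactly what would make the stored indices trustworthy):

```rust
use std::any::TypeId;
use std::collections::HashMap;
use std::marker::PhantomData;

// The `R` parameter ties the map to one registry type, so the invariant
// "indices are valid for registry R" only needs to be upheld at construction.
struct ComponentMap<R> {
    indices: HashMap<TypeId, usize>,
    registry: PhantomData<R>,
}

struct MyRegistry; // stand-in for a concrete registry type

impl<R> ComponentMap<R> {
    fn new(indices: HashMap<TypeId, usize>) -> Self {
        Self {
            indices,
            registry: PhantomData,
        }
    }

    // Look up a component's index within the registry, by type.
    fn index_of<C: 'static>(&self) -> Option<usize> {
        self.indices.get(&TypeId::of::<C>()).copied()
    }
}

fn main() {
    let mut indices = HashMap::new();
    indices.insert(TypeId::of::<u32>(), 0);
    indices.insert(TypeId::of::<bool>(), 1);
    let map: ComponentMap<MyRegistry> = ComponentMap::new(indices);
    assert_eq!(map.index_of::<bool>(), Some(1));
    assert_eq!(map.index_of::<f64>(), None);
}
```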
`cargo-mutants` looks like a good crate for mutation testing. It would be a good one to add to CI (and probably should be added to the list in #9).
Currently, `Schedule` stages are defined at creation by simply partitioning based on mutable and immutable claims. However, this is not optimal for, say, the schedule benchmark in the `ecs_bench_suite`, which borrows (A, B), (C, D), and (C, E) mutably, one pair per system. The current implementation divides this into two stages, but it doesn't have to: the input doesn't actually have any tables with all three of the C, D, and E components.
It's clear when looking at the example that we could run all three systems in parallel. However, making generic rules for this is trickier, and there are a few things to keep in mind when proceeding.
Therefore, I think something like the following might work, at an extremely high level: for each system in the schedule, find all tables that can be executed during the current stage (meaning they aren't overlapping with a previous borrow from another stage in the schedule). The system should be executed over those tables during this stage, since we know there is no pending work on them that must complete before they can be run right now. We then proceed to the next stage and again attempt to run the systems, in order, over the remaining tables. We do this until there are no tables remaining.
This shifts the execution scheduling from being system-based to being table-based. I believe this should allow for faster execution, as it parallelizes as much computation as possible. In the `ecs_bench_suite` example, it would run all three systems at once, which should cut down execution time significantly.

I think it will be worth it, because the overhead introduced by dynamic scheduling should be minimal compared to the actual iteration. In most cases where `Schedule`s are actually needed, this should result in net performance gains, as the overhead is nothing compared to the saved iteration time.
More specific details will need to be determined still, specifically with how mutable and immutable borrows will differ, as well as how the schedule will keep track of which tables have been modified. Additionally, consideration will need to be made for how to deal with post-processing of systems, and whether that needs to be reworked.
There are some checks that can be evaluated at compile time, rather than at runtime. These are all programmer errors, and should be made known as soon as possible.
These checks can be moved to compile time using `const fn`s. For example, a check could be run as follows: `const _: () = some_check()`. However, all of these cases currently require unstable nightly features. Therefore, I think it best to introduce an `unstable_const_checks` feature that requires `nightly`, allowing users to opt in to these checks instead of the run-time checks.
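The general pattern is sketched below with a deliberately simplified, illustrative invariant (the real checks described here involve registry/view relationships, which is what pushes them onto nightly). An associated `const` on a generic type is only evaluated when mentioned, and a failing `assert!` in its initializer aborts compilation:

```rust
// Compile-time check via const evaluation: the invariant is validated
// per monomorphization, at build time rather than at run time.
struct Checked<const VIEWS: usize, const REGISTRY: usize>;

impl<const VIEWS: usize, const REGISTRY: usize> Checked<VIEWS, REGISTRY> {
    const VALID: () = assert!(
        VIEWS <= REGISTRY,
        "more components viewed than the registry contains"
    );

    fn check() {
        // The const must be used somewhere, or it is never evaluated.
        let () = Self::VALID;
    }
}

fn main() {
    Checked::<2, 3>::check(); // compiles: 2 <= 3
    // Checked::<4, 3>::check(); // uncommenting this fails the build
}
```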
`serde` provides a method to check whether a `Serializer` or `Deserializer` is human-readable. If it is, the serialization should account for that. I think it would be best for human-readable serialization to be row-based, while the non-human-readable serialization can remain column-based.
Putting this here so I don't forget, because I don't want #44 to be blocked on this.
I've added two allowances of this lint in `registry/seal/storage.rs` and `registry/debug.rs`. These should be removed when the fix for rust-lang/rust-clippy#8366 is available in the `nightly` channel (as of now, it is not yet available).
Currently, these macros accept any types and put them into heterogeneous lists. This is fine, and functions well enough, but users are not properly informed if the components are not actually components (i.e. don't actually implement `Any`, which is the only requirement).
The following code block compiles:
use brood::entity;
// Assuming `A` is some struct that does not implement `Component`.
let entity = entity!(A);
Ideally, it would actually error on creation, since that technically isn't an `Entity` as defined within this library:
error[E0277]: the trait bound `A: Component` is not satisfied
let entity = entity!(A);
-------------------- ^ the trait `Component` is not implemented for `A`
Or something similar to that. This could be tested using `trybuild`. The same holds for the other macros mentioned in the title: they should error if the output value is not actually correct.
It's a common pattern throughout all of the `Registry` trait functions to pass component columns along with a length, and in each method those raw values are used to create a `Vec<C>` for each component `C` specified by the `archetype::Identifier`. This is fine, but we end up with a lot of unsafe code being repeated in every single method. For example, in `src/registry/seal/storage.rs`, `Vec::<C>::from_raw_parts` is called 8 times, which is in two thirds of the functions provided in that module.

A `component::Columns` abstraction might help here. Rather than having to deal with the raw parts directly and risk mishandling them, the pointers, capacities, and shared `len` could be placed in a single struct that handles indexing into the columns, iterating over them, getting `entity::Identifier`s, etc. When iterating, a `component::Column` type could be returned which allows unsafe casting to a `Vec<C>` by simply providing the type to a method.

Some investigation will need to be done to ensure this actually reduces the complexity of the code. It may be difficult to do, and we don't want to make things more complicated by introducing more modules needlessly.
The `ahash` library is the default hasher used by the `hashbrown` library. It has worked well so far, but it brings along a lot of extra dependencies for randomness to protect against HashDoS attacks.

With the library as it is, I'm not sure that there is any need for the extra protection against HashDoS attacks. There is a note on the Wikipedia entry for HashDoS that says "Non-keyed "simple" hashes remain safe to use as long as the application's hash table is not controllable from the outside." Therefore, I'd like to examine two things.

If possible, I would like to remove the `ahash` dependency entirely and use something simpler. `ahash` version `0.8` made some changes that brought this to my attention: rather than being able to readily use it with no features enabled, I have to choose whether to enable `compile-time-rng`, `runtime-rng`, or neither, with the chosen features having an effect on usability on certain targets. For example, I can't enable `runtime-rng` and expect it to work on `no-std` targets. Rather than getting into the game of guessing which features to enable at minimum and keeping up with that, while also recommending to users what features they should enable themselves, I'd rather just use the simplest solution. If HashDoS resistance isn't needed, then there is no need for all of the extra work that comes with it.
Seems the `valgrind` workflow is broken. I believe it has to do with some changes made recently that installed a more recent version from source. The way I did it probably wasn't actually guaranteed to work with each update of the binary.

We may have to downgrade the version used for CI if there is no easy workaround to make it reliable. I would rather not have to fix the `valgrind` path every few weeks.
I'm not in love with the use of "Allocator" in this context. It conflicts with the other common usage of the word "allocator", in relation to heap allocations, which I imagine would only be confusing for someone looking at this library's private API with fresh eyes.

The function of the `entity::Allocator` is to store the locations of entities (meaning their current archetype and index) using a generational index system. `allocate()` and `allocate_batch()` find a free location slot and register the entity's location there, while the other methods simply return or modify the current location.

So perhaps `entity::Index` or `entity::Map` would be better? Or `entity::LocationMap`? "Map" denotes key-value, and the current system is just a bunch of locations keyed by generational index (`entity::Identifier` -> `Location`).
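For reference, the generational-index scheme described above can be sketched as follows (names like `Map`, `Slot`, and `Location` are illustrative, not brood's actual definitions):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Identifier {
    index: usize,
    generation: u64,
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct Location {
    archetype: usize, // stand-in for the archetype key
    row: usize,       // index within the archetype's table
}

struct Slot {
    generation: u64,
    location: Option<Location>,
}

#[derive(Default)]
struct Map {
    slots: Vec<Slot>,
    free: Vec<usize>,
}

impl Map {
    // Register a new entity's location, reusing a freed slot if available.
    fn insert(&mut self, location: Location) -> Identifier {
        if let Some(index) = self.free.pop() {
            let slot = &mut self.slots[index];
            slot.location = Some(location);
            Identifier { index, generation: slot.generation }
        } else {
            self.slots.push(Slot { generation: 0, location: Some(location) });
            Identifier { index: self.slots.len() - 1, generation: 0 }
        }
    }

    // Look up a location; stale identifiers (wrong generation) return None.
    fn get(&self, id: Identifier) -> Option<Location> {
        let slot = self.slots.get(id.index)?;
        (slot.generation == id.generation).then(|| slot.location).flatten()
    }

    // Free a slot, bumping the generation so old identifiers are invalidated.
    fn remove(&mut self, id: Identifier) {
        if let Some(slot) = self.slots.get_mut(id.index) {
            if slot.generation == id.generation {
                slot.location = None;
                slot.generation += 1;
                self.free.push(id.index);
            }
        }
    }
}

fn main() {
    let mut map = Map::default();
    let id = map.insert(Location { archetype: 0, row: 3 });
    assert_eq!(map.get(id), Some(Location { archetype: 0, row: 3 }));
    map.remove(id);
    assert_eq!(map.get(id), None); // stale generation
}
```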
The `tests/trybuild/entities/length_not_integer.rs` test fails on current `nightly` `rustc`:
EXPECTED:
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
error[E0308]: mismatched types
--> tests/trybuild/entities/length_not_integer.rs:10:38
|
10 | let entities = entities!((A, B); 1.5);
| ^^^ expected `usize`, found floating-point number
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
ACTUAL OUTPUT:
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
error[E0308]: mismatched types
--> tests/trybuild/entities/length_not_integer.rs:10:38
|
10 | let entities = entities!((A, B); 1.5);
| ------------------^^^-
| | |
| | expected `usize`, found floating-point number
| arguments to this function are incorrect
|
note: function defined here
--> $RUST/alloc/src/vec/mod.rs
|
| pub fn from_elem<T: Clone>(elem: T, n: usize) -> Vec<T> {
| ^^^^^^^^^
error[E0308]: mismatched types
--> tests/trybuild/entities/length_not_integer.rs:10:38
|
10 | let entities = entities!((A, B); 1.5);
| ------------------^^^-
| | |
| | expected `usize`, found floating-point number
| arguments to this function are incorrect
|
note: function defined here
--> $RUST/alloc/src/vec/mod.rs
|
| pub fn from_elem<T: Clone>(elem: T, n: usize) -> Vec<T> {
| ^^^^^^^^^
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
This is caused by a recent change that is currently on nightly. The `trybuild` expected output should be changed to match this format.
There is potential unsoundness in the query methods of `World`. If the provided `Views` contains both a mutable reference to a component and a second (mutable or immutable) reference to the same component, the returned result will both mutably and immutably (or possibly mutably again) reference that component on each entity, resulting in undefined behavior.

This is a tricky one to fix, although I have an idea. We can determine whether the view is valid at compile time, since the view is static. If we wrap the `Views` in a wrapper struct, we can associate a `const` with the struct whose evaluation panics if the invariant is not met, similar to what is described in this reddit post. The `static_assertions` crate may also be useful for this, though I haven't looked too closely. The one trick is that the view isn't actually instantiated; it's just passed as a generic, so the constant needs to be used at some point to actually be compiled. Tests will need to be included to ensure that attempting to break Rust's mutable referencing rules doesn't compile.
`Entry::add()` and `Entry::remove()` both currently assume the components specified exist within the `Registry`, when that can't actually be assumed. Rather than doing an `unwrap_unchecked` on the `component_index` at the beginning of both methods, a regular `unwrap` should be done to trigger a panic if the component is not part of the `Registry`.

In the future, hopefully we can move this check to compile time. Ideally, we'd be able to check whether a component is part of a `Registry` using a `const fn`, but that is currently only possible in nightly Rust. Perhaps it may be worthwhile to introduce a `nightly` feature gate to allow const checks.
CI should be set up through GitHub Actions. Specifically, CI should test the following:

- a `no_std` environment

As far as rustc versions go, the code will currently only build on `nightly` and `beta`. It should also build on 1.58.0 stable when it is released in a few weeks.
I believe there are a few cases where this would help when a macro references itself.
As the registry size grows, the build time of a project grows exponentially. This is due to the recursive methods used to build an `Archetype`'s entity type definition. The way it is set up, each path splits into two possibilities: having the component in the key, or not having it. This creates two paths the compiler has to keep track of. Nest that many times deep (such as, for example, the frag_iter ECS benchmark, with 27 layers), and you get a heck of a lot of things for the compiler to handle. It just won't scale, and the compiler can't even compile it without running out of memory.

This is due to giving each `Archetype` a generic type at compile time and encoding those types into their appropriate key. I can't think of any way to keep that generic without this split of paths, but I think that removing the generic is actually possible without any serious benchmark hits.

Rather than building the type from the key, the `Archetype` itself can just store a reference to the key. Then, when it comes to visiting the components, it can iterate over the key and pull out the columns as their appropriate types. This shouldn't be any problem, since we are already iterating over the key on insertions anyway to construct the type. This just moves that iteration inside the `Archetype`, with the benefit of no path splitting.

Instead, the `Archetype` will need to have a `Registry` generic, so it has somewhere to get the `Component` types from. This will also ensure that `Archetype`s aren't accidentally shared across `World`s somehow, and that the types returned are exactly what we expect.

I'll hack at this a bit this weekend to see if there are any situations where this won't work. I don't think it will cause any issue with query `Views`, since the type is known on both ends in that case. I'll also run some benchmarks to see if there are regressions on insertions.
Tracking issue for fully documenting all items in the library.
"All items" includes everything, both public and private. Up to this point, everything that is public has been documented. However, private items are not documented. This issue essentially just tracks this private item documentation.
Note that private item documentation is a goal because it makes maintainability and extensibility of the library that much easier.
As modules are completely documented, they should be checked off below:
With `World` now built out to a usable state, it's time to turn away from the Entity and Component aspects of ECS and focus on the System aspect. Going forward, I want to be careful to keep the simplicity that is currently present within the library.

The main goal here is to introduce the ability to run queries in parallel. Currently, a single query can be run at a time, with the possibility to parallelize each entity's iteration using the `rayon` crate. But if queries have non-overlapping views, they should also be able to run in parallel at the query level. This can be achieved using a `Dispatcher` that contains any number of `System`s, where `System` is a trait, `Dispatcher` is a struct, and `Dispatcher` also implements `System` (to allow for reuse of methods).

Two queries' views are defined as non-overlapping if any mutable views (`&mut T` or `Option<&mut T>`) within one query are unique with respect to the other query, meaning the other query has no views, mutable or immutable, on that same component. This is modeled after the referencing rules of the language to ensure everything remains sound and predictable.
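The non-overlap rule can be sketched as a small predicate (component names stand in for view types here; the real check would operate on `Views` type lists, not strings):

```rust
use std::collections::HashSet;

// Each query's views reduce to a set of mutable and immutable claims.
struct Claims<'a> {
    mutable: HashSet<&'a str>,
    immutable: HashSet<&'a str>,
}

// Two queries may run in parallel only if neither's mutable claims appear
// anywhere (mutably or immutably) in the other, mirroring &mut exclusivity.
fn non_overlapping(a: &Claims, b: &Claims) -> bool {
    a.mutable
        .iter()
        .all(|c| !b.mutable.contains(c) && !b.immutable.contains(c))
        && b.mutable
            .iter()
            .all(|c| !a.mutable.contains(c) && !a.immutable.contains(c))
}

fn main() {
    let q1 = Claims {
        mutable: HashSet::from(["A"]),
        immutable: HashSet::from(["B"]),
    };
    let q2 = Claims {
        mutable: HashSet::from(["C"]),
        immutable: HashSet::from(["B"]),
    };
    // Disjoint mutable claims; `B` is only shared immutably.
    assert!(non_overlapping(&q1, &q2));

    let q3 = Claims {
        mutable: HashSet::from(["B"]),
        immutable: HashSet::new(),
    };
    // `&mut B` in q3 conflicts with q1's immutable view of `B`.
    assert!(!non_overlapping(&q1, &q3));
}
```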
I think two types of `System` should exist: `System` and `ParSystem`, where `System` operates over a `query::result::Iter` and `ParSystem` operates over a `query::result::ParIter`. This allows for a combination of outer and inner parallelism, if desired.

Having `System` be defined as a trait is also advantageous, since it means it can be implemented over `Fn(...)` with the correct parameters (to be established later), as well as implemented by users on custom structs, which allows providing state to a system. A `Dispatcher` can also implement `System`, allowing it to be used in the same places any other `System` can be used. (Actually, after thinking about this more, it might be an anti-pattern. For example, how would the queries be provided to a `Dispatcher` in the same way they would for a `System`? It doesn't make much sense, because it will be operating over many queries, not just one.)
A method will be added to `World`, perhaps called `execute()` (and `par_execute()`), which will take a system, execute the query, and call the system with the result.

A `System` will accept the query result iterator, as well as some external context to allow access to other things it may need. For example, systems will often want to insert entities into the `World`. This can't be done during the iteration, and therefore must happen outside of the execution of the system. However, I must stress that I don't want this to turn into a crazy hodge-podge of features stacked on top of systems. This is meant to be kept as simple as possible and fulfill its simplest purpose. This context could also be omitted from the parameters altogether; instead, a separate, optional trait method, called `world_post_processing`, would be added that would allow the system to modify the `World` after running.
As far as things like events (which some of the other ECS libraries are doing) go, I prefer not to include those at all if possible. I don't think they fit the actual core concept of an ECS container. ECS containers are meant to hold the components, represent the relationship between those components as a conceptual entity, and allow operations upon those components through systems. If anything further is to be added on top of that, it should be done externally. The `System` trait design proposed here allows structs to be used that can contain fields referencing whatever external systems the user may want to use in conjunction with the library. Simplicity is the name of the game here.

One final note: I'm not sure it's reasonable to include a non-parallel dispatcher. All it would be is a wrapper around a list of systems that are all executed sequentially, essentially being useless, and it may just serve to confuse people who think they're getting parallelism when they actually are not. Perhaps a `Dispatcher` and `ParDispatcher` should exist to hold `System`s and `ParSystem`s respectively, but only be made available when the `rayon` feature is enabled. That also brings up the question of whether `System`s add anything when not used in the context of parallel dispatch, and so perhaps `System` and `ParSystem` should also be feature-gated. In fact, a `System` without the context of a `Dispatcher` is basically the same thing as a `World::query()`, and I hate having two things that both perform the same function, which is why I have been hesitant to go forward with systems thus far.