bevyengine / bevy
A refreshingly simple data-driven game engine built in Rust
Home Page: https://bevyengine.org
License: Apache License 2.0
std::any::TypeId is not stable across binaries. This makes it unsuitable for use in scene files, networking, or dynamic plugins.
In the short term we get around this by just using std::any::type_name. But this isn't particularly efficient for certain serialization cases, or for the Eq / Hash traits. Legion uses TypeIds extensively and we've (in the short term) replaced those with type_name to allow for dynamic plugin loading. But that probably incurs measurable overhead.
It's worth exploring the idea of a StableTypeId, which is just a wrapped integer. I think it makes sense to use a const-hashing algorithm on type_name to produce StableTypeIds.
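A minimal sketch of what that could look like, assuming FNV-1a as the hash (it is simple enough to make `const` later); the constants and the Position component are illustrative, and note that type_name's output format is itself not guaranteed stable across compiler versions:

```rust
use std::any::type_name;

// Sketch of the proposed StableTypeId: a hash of the type's name, so
// the same type produces the same id across binaries (assuming the
// type_name string is stable between compilations).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct StableTypeId(u64);

impl StableTypeId {
    fn of<T: ?Sized>() -> Self {
        // FNV-1a: cheap, deterministic, easy to const-ify later.
        let mut hash: u64 = 0xcbf2_9ce4_8422_2325;
        for byte in type_name::<T>().bytes() {
            hash ^= byte as u64;
            hash = hash.wrapping_mul(0x0000_0100_0000_01b3);
        }
        StableTypeId(hash)
    }
}

struct Position;

fn main() {
    // The same type always hashes to the same id; distinct names differ.
    assert_eq!(StableTypeId::of::<Position>(), StableTypeId::of::<Position>());
    assert_ne!(StableTypeId::of::<Position>(), StableTypeId::of::<u32>());
    println!("ok");
}
```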
This would provide useful shorthand in "function systems" when there are a lot of components. Something like:
#[derive(Bundle)]
struct MyBundle {
    a: ComponentA,
    b: ComponentB,
}

fn some_system(component_group: Bundle<MyBundle>) {
    println!("a: {:?}", component_group.a);
    println!("b: {:?}", component_group.b);
}
Ideally it could be used alongside normal queries, like (&MyComponent, Bundle<MyBundle>).
Currently the depth buffer fights with depth from other passes. Additionally, this enables things like rendering UI to a texture.
The RenderGraph should already be able to handle headless rendering scenarios. The only missing pieces are:
Pros:
Cons:
Legion's Query type is pretty nasty to construct manually / isn't suitable for Bevy's system functions. It was clearly designed to be constructed using a builder pattern. To solve this, Bevy defines its own SimpleQuery type, which then gets exported as Query. This is great, but it currently doesn't support query filters, which are a pretty big feature/selling point of Legion. SimpleQuery should have some way to define filters, ideally in a way that doesn't use Legion's filter type directly. This will likely require some shenanigans.
Bevy should be able to play audio files. This should probably build on top of the RustAudio work:
https://github.com/RustAudio/rodio
https://github.com/RustAudio/cpal
The "specialized pipeline" terminology lines up with the PipelineSpecialization type and more accurately reflects what's happening.
Unlike the old legion api, bevy_ecs systems now no longer have direct world (or subworld) access. Instead the plan is to give system queries safe direct access to the components/archetypes in the query.
fn system(mut query: Query<(&mut A, &B)>) {
    // iteration
    for (a, b) in &mut query.iter() {
        // do stuff here
    }

    // direct component access
    let component = query.get::<A>(some_entity).unwrap();
}
Currently direct component access works by just letting queries access their internal World reference directly: query.get::<Component>(entity) is equivalent to world.get::<Component>(entity).
This is technically safe because hecs doesn't allow parallel access to components (it would panic if two queries tried to get write access to the same component in parallel), but it's harder to reason about and could cause programs that appear to work to intermittently fail when invalid accesses occur. Queries should have a layer of checks when accessing an entity's components directly, to ensure the query has permission to access the given entity's component. This allows potentially invalid component accesses to fail fast (and consistently). It would mean that if you want to access an entity's component directly from a query, that component needs to match the query's archetype / mutability exactly.
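A sketch of that permission layer: the query records which component types (and mutability) it covers, and direct access fails fast when a type isn't part of the query. CheckedQuery, Access, and the component types here are hypothetical stand-ins, not Bevy API:

```rust
use std::any::TypeId;
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Access { Read, Write }

// Hypothetical query wrapper that knows its own access set.
struct CheckedQuery {
    allowed: HashSet<(TypeId, Access)>,
}

impl CheckedQuery {
    fn get<T: 'static>(&self) -> Result<(), String> {
        // Direct access is only permitted for types the query covers.
        if self.allowed.contains(&(TypeId::of::<T>(), Access::Read))
            || self.allowed.contains(&(TypeId::of::<T>(), Access::Write))
        {
            Ok(()) // the real impl would return the component from the World here
        } else {
            Err("component not covered by this query".to_string())
        }
    }
}

struct A;
struct B;

fn main() {
    // A query over &mut A may access A directly, but not B.
    let mut allowed = HashSet::new();
    allowed.insert((TypeId::of::<A>(), Access::Write));
    let query = CheckedQuery { allowed };
    assert!(query.get::<A>().is_ok());
    assert!(query.get::<B>().is_err());
    println!("ok");
}
```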
When combined with a proper parallel scheduler, this gets us most of the way to acceptable / consistent safety levels, because two systems with conflicting sets of queries would not be scheduled to run at the same time. The last "gap" would then be two queries within the same system potentially accessing the same components.
For example:
fn system(mut query_1: Query<(&mut A, &B)>, mut query_2: Query<&mut A>) {
    for (a, b) in &mut query_1.iter() {
        for a_might_collide in &mut query_2.iter() { /* might panic here */ }
    }
}
There are a few solutions to this problem:
I personally prefer (2). I really like not needing to pass in and split subworlds. I think the planned Query api is much more pleasant to use and I'm willing to allow run time failures in this case as long as we fail fast and consistently if you try to do something "wrong".
That being said, if anyone has other ideas or opinions on the right path forward, feel free to chime in.
When trying to render a shape::Quad using a MeshEntity with an OrthographicCameraEntity, nothing rendered.
This seems to be an issue with that camera, which expects "2D data" but not a MeshEntity.
Edit: Changing the name of the camera to use a different uniform binding makes the Quad render.
.add_entity(OrthographicCameraEntity {
    camera: Camera {
        name: Some(bevy::render::base_render_graph::uniform::CAMERA.to_string()),
        ..Default::default()
    },
    ..Default::default()
})
Currently, RenderResource trait impls are used to populate RenderResourceAssignments with [BindingName, RenderResource] pairs. Pipeline bind group descriptors are then passed to RenderResourceAssignments to generate the final BindGroup / RenderResourceSet.
This approach allows maximum flexibility: RenderResourceAssignments can be populated from any number of sources, and those sources don't need to care about bind group layout.
The downside to this approach is that it is computationally heavy. Generating bind groups involves multiple loops / name hashes.
It makes sense to somehow allow developers to opt out of this if they don't need it. It should be possible to take a RenderResources impl and turn it directly into a BindGroup / RenderResourceSet without looking at string names at all.
proc_macros contribute a number of heavy dependencies to the build tree (proc_macro2, syn, quote). Eliminating them could significantly boost clean compiles.
One approach would be to remove proc_macros entirely, or to interact with the TokenTree directly. However, using Watt would require minimal code changes: https://github.com/dtolnay/watt. It also offers a Watt version of serde-derive, which would aid us in fully removing syn/quote from the tree.
The current Input<MouseButton> interface is very nice, but implies the existence of a single mouse input device.
In the future, multiple mice or touch inputs may need to be supported (touch perhaps having a "backwards compatible" interface with MouseButton?); for that, a mechanism to select/retrieve the input device ID would be needed.
There will likely be cases where people want to compose uniform buffers from multiple components (ex: combine Translation, Rotation, Scale into one uniform buffer).
Of course this can currently be done manually, but if it becomes a common pattern it makes sense to automate it.
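As a sketch of the manual version such automation would generate, assuming hypothetical Translation / Rotation / Scale components (the packing layout is illustrative):

```rust
// Components that would be combined into a single uniform buffer.
#[derive(Debug, Clone, Copy)]
struct Translation([f32; 3]);
#[derive(Debug, Clone, Copy)]
struct Rotation([f32; 4]); // quaternion
#[derive(Debug, Clone, Copy)]
struct Scale(f32);

// #[repr(C)] so the bytes can be copied to the GPU as-is.
#[repr(C)]
#[derive(Debug, Clone, Copy)]
struct TransformUniform {
    translation: [f32; 3],
    scale: f32, // packed next to translation to avoid padding
    rotation: [f32; 4],
}

fn compose(t: &Translation, r: &Rotation, s: &Scale) -> TransformUniform {
    TransformUniform { translation: t.0, scale: s.0, rotation: r.0 }
}

fn main() {
    let uniform = compose(
        &Translation([1.0, 2.0, 3.0]),
        &Rotation([0.0, 0.0, 0.0, 1.0]),
        &Scale(2.0),
    );
    assert_eq!(uniform.scale, 2.0);
    // 12 + 4 + 16 bytes, no padding.
    assert_eq!(std::mem::size_of::<TransformUniform>(), 32);
    println!("ok");
}
```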
From what I can tell, they are incompatible with legion's new lifetimes. Additionally, they add a significant amount of cold compile time (over 20 seconds).
It's a shame, because they are so ergonomic:
// foreach system from the ECS Guide
fn score_system(player: Com<Player>, mut score: ComMut<Score>) {
    let scored_a_point = random::<bool>();
    if scored_a_point {
        score.value += 1;
        println!(
            "{} scored a point! Their score is: {}",
            player.name, score.value
        );
    } else {
        println!(
            "{} did not score a point! Their score is: {}",
            player.name, score.value
        );
    }
    // this game isn't very fun is it :)
}
This is likely a system ordering issue.
The compute/render pass lifetime restrictions require the user to keep resources alive while the pass is recorded. This introduces a problem when resources live in a growable container (ex: a Vec), and it's hard to express to Rust that your elements will not be removed from it.
A solution recently discovered is an arena: when adding elements to it, it gives you back references with the same lifetime as the arena. So you can have an Arc<Something> (where Something contains wgpu resources) that is put into the arena, which is cleared/initialized before the pass recording starts (and cleared after), i.e. this will work:
let mut arena = TypedArena::new();
let mut pass = encoder.begin_render_pass(...);
let something = Arc::new(Something::new(wgpu_device.create_something()));
let something_ref = arena.alloc(something);
pass.set_vertex_buffer(something_ref.slice(..)); // allowed now!
Hopefully, this can help refactoring the engine in a way that doesn't need to lock the world like it's currently done in WgpuResources.
These don't really need to be coupled and the logic shouldn't be re-implemented for each render backend.
2D depth is currently misbehaving. It starts clipping at < 0.5.
Bevy currently uses glam, which is blissfully simple, but also doesn't cover the breadth of types required (UVec2 / IVec2 / etc). Additionally, given that we are using a data driven approach for shaders, it makes sense that the math library can map to all (or at least most) shader types. Ideally all types can be zero-copy converted into their shader types.
Libs to consider:
nalgebra
euclid
pathfinder_geometry
Certain types of rendering are made much easier by an immediate mode api (canvases, text, etc).
I also think there is potential here to make the "immediate mode" api the foundation for a Renderable abstraction.
I like the idea of having a core set of "batchable" draw calls and building Drawable abstractions on top. A breadth of engines use this approach and it works quite well.
Draw calls would be scoped to entities in some ImmediateMode or DrawCall component. Then during render graph execution, a draw call list would be generated based on some draw order algorithm (ex: z-sort). State changes could then be reduced as much as possible according to the draw call ordering.
This would probably remove the need for the existing DrawTarget abstraction. Instead, the current graph nodes would be fully responsible for producing an ordered list of draw calls from the input World.
I was hoping this wouldn't be necessary because I like the simplicity of the current process: "define data struct", "derive RenderResources", "register Asset". The ShaderDefs system also helps cover a huge portion of the "traditional material feature set". However, there are common aspects of a material (such as a transparency flag) that cannot be encoded as a uniform or shader def. I have two ideas so far:
Camera entities should have a Scale component to control zooming.
Each piece of text is currently rendered to its own texture, which isn't very efficient. It would be good to use an atlas to render text.
FNV is much faster for small keys and we aren't worried about DDoS in the rendering system, so it makes sense to use FNV where we can (https://github.com/servo/rust-fnv), especially considering that hashing is a bottleneck right now.
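For illustration, here is a dependency-free FNV-1a hasher plugged into std's HashMap via BuildHasherDefault; in practice we would just use the fnv crate's FnvHashMap, which has this exact shape:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Minimal FNV-1a hasher (the fnv crate provides the same thing).
struct FnvHasher(u64);

impl Default for FnvHasher {
    fn default() -> Self {
        FnvHasher(0xcbf2_9ce4_8422_2325) // FNV offset basis
    }
}

impl Hasher for FnvHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &byte in bytes {
            self.0 ^= byte as u64;
            self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
        }
    }
}

// Drop-in replacement for HashMap that skips SipHash.
type FnvHashMap<K, V> = HashMap<K, V, BuildHasherDefault<FnvHasher>>;

fn main() {
    let mut bindings: FnvHashMap<&str, u32> = FnvHashMap::default();
    bindings.insert("camera", 0);
    bindings.insert("lights", 1);
    assert_eq!(bindings.get("camera"), Some(&0));
    println!("ok");
}
```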
Now that the scene system is in place, it makes sense to load as much as possible from gltf files ... not just meshes.
This appears to be because the Camera transform is incorrect on the first frame.
Since we upgraded legion to master, legion panics when we enable cross-archetype parallelism in "system fns". The short term fix is to disable it:
But that's not ideal. This is either a legion bug or a bevy "system fn" bug, and we should fix it.
This would allow us to use legion's entity builders directly instead of defining our own builders
The only way to handle errors in systems right now is to result.unwrap() them. This is both less ergonomic and panic-ey. Ideally systems could optionally return an anyhow::Result<()>. It would also be great if developers could define their own error handlers. Maybe some devs want to print their errors, whereas others want to panic.
I think there are two approaches we could adopt here (the second being to handle the result inside into_system()).
I think option (2) is the least destructive / most friendly to upstream, but it means that only system fns can handle errors.
If this is implemented, it also makes sense to modify bevy libs to return error types instead of Options for common operations to improve ergonomics.
fn some_system(a: Res<A>, b: Res<B>, x: Com<X>, y: Com<Y>) -> Result<()> {
    a.do_something_risky()?;
    // system logic here
    Ok(())
}

// inside into_system() impl for Fn(Res<A>, Com<X>) -> Result<()>
system_builder.with_resource::<ErrorHandler>();
// resources are (error_handler, a, b)
let result = run_system(a, b, x, y);
error_handler.handle(result);
The biggest downside to implementing this I can think of is that it multiplies the number of system fn impls by 2 (and there are already hundreds of impls). That would come at an estimated clean-compile time cost of 40 seconds on fast computers ... not ideal. The best way to mitigate that is to revert to non-flat system fn impls, which would then only require a single new impl.
Doing so would both remove the need for a macro for system fn impls and reduce clean compile times by 40 seconds (current) / 80 seconds (with Result impls). But it's also not nearly as nice to look at / type:
// pseudocode
impl IntoSystem for Fn(ResourceSet, View<'a>) -> Result<()> { /* impl here */ }

fn some_system((a, b): (Res<A>, Res<B>), (x, y): (Com<X>, Com<Y>)) -> Result<()> {
    a.do_something_risky()?;
    // system logic here
    Ok(())
}
This means that any changes that result in a shader def removal will not get picked up.
It is both offset on the y-axis and extremely aliased.
I'm pretty sure this is because the skribo crate defaults to DirectWrite. I see a number of possible solutions:
1. Fix the issue in skribo or font-kit, which would fix the offset problem but not the aliasing problem.
2. Swap to rusttype or ab_glyph. These would hopefully provide consistency and have the benefit of being pure rust.
Now that the migration to hecs/bevy_ecs is complete, we no longer have parallel system execution. bevy_ecs does have a naive dependency-unaware parallel scheduler (just to prove that the interfaces work), but we now need to take read/write resource and component dependencies into account.
Right now the naming used for "gpu resource" concepts is confusing and sometimes conflicting:
- RenderResourceSet: a set of RenderResourceIds, uniquely identifiable via a hash of the contained RenderResourceIds. These eventually get turned into a wgpu::BindGroup by the wgpu RenderResourceContext.
- RenderResourceAssignments: named RenderResourceIds, plus named vertex buffer / index buffer assignments. Used alongside pipeline layouts to generate RenderResourceSets. The RenderResourceSets are cached and are only re-generated when a RenderResourceAssignment is changed.
- render_resource::RenderResource: trait impls indexed by their string names, which resolve to RenderResourceIds.
- RenderResourceContext / render_resource::RenderResources.
Some thoughts:
- The RenderResourceId type could be broken up into BufferId, TextureId, SamplerId, etc. This gives additional type safety by default. When they need to be conceptually grouped (ex: bindings), a wrapper type can be used.
- Consider renaming RenderResourceAssignments to UniformAssignments or just Uniforms. If that happens it probably makes sense to move vertex buffer assignments to a different type. The same argument applies to render_resource::RenderResource and render_resource::RenderResources.
- The render_resource_context::RenderResourceContext / render_resource_context::RenderResources distinction is confusing. RenderResources is just a wrapper over the Box-ed RenderResourceContext. Maybe it's better to just insert the Box<dyn RenderResourceContext> directly into legion's resource collection. Accessing the resource by its type wouldn't be as clean, but it would remove another type that people need to worry about.
In general the goal here should be to improve clarity and remove as much "invented jargon" as possible. Types should be self-describing.
The metal backend breaks when gl_VertexIndex is used. This will likely either need to be fixed upstream in wgpu or by removing gl_VertexIndex.
There are certain sets of nodes (ex: material nodes) that have the same edge requirements (ex: add a node edge to the main pass). By defining an autowire group and adding a node to that group, the amount of boilerplate required for adding that node type could go down.
// current
self.add_system_node(node::SPRITE, RenderResourcesNode::<Sprite>::new(true));
self.add_node_edge(node::SPRITE, base_render_graph::node::MAIN_PASS);

// with autowire groups
self.add_system_node(node::SPRITE, RenderResourcesNode::<Sprite>::new(true));
self.add_node_to_group(node::SPRITE, "main_pass_resources");
The examples above also illustrate that this doesn't really win us anything for the single-edge case, which will probably be the most common.
Additionally, it would make extending the graph easier. Ex: what if someone adds a new pass node that needs the same dependencies as the main pass? With autowire groups, they could just add themselves as an autowire target to "main_pass_resources" and immediately get all main pass resources wired up to them.
Currently camera position and rotation must be handled by directly modifying the Transform component.
This should be reconciled with manually setting the Transform (ex: calling look_at()). We don't want position and rotation components overriding manually set transforms.
Bevy should be able to render text to a texture, which can then be used by UI nodes, sprites, 3D models, etc.
Opaque objects should be drawn front-to-back (to encourage early fragment discarding) and transparent objects should be drawn back-to-front (to ensure correct transparency results).
Currently draw order is undefined / dependent on pipeline order.
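The ordering rule above can be sketched as a single sort key (DrawCall is a hypothetical stand-in for however draw calls end up being represented):

```rust
#[derive(Debug)]
struct DrawCall {
    depth: f32,
    transparent: bool,
}

// Opaque calls first, front-to-back (ascending depth) for early-z;
// transparent calls after, back-to-front (descending depth) for blending.
fn sort_draw_calls(calls: &mut [DrawCall]) {
    calls.sort_by(|a, b| {
        a.transparent.cmp(&b.transparent).then_with(|| {
            if a.transparent {
                b.depth.partial_cmp(&a.depth).unwrap()
            } else {
                a.depth.partial_cmp(&b.depth).unwrap()
            }
        })
    });
}

fn main() {
    let mut calls = vec![
        DrawCall { depth: 5.0, transparent: true },
        DrawCall { depth: 2.0, transparent: false },
        DrawCall { depth: 9.0, transparent: true },
        DrawCall { depth: 1.0, transparent: false },
    ];
    sort_draw_calls(&mut calls);
    // Opaque near-to-far (1.0, 2.0), then transparent far-to-near (9.0, 5.0).
    let depths: Vec<f32> = calls.iter().map(|c| c.depth).collect();
    assert_eq!(depths, vec![1.0, 2.0, 9.0, 5.0]);
    println!("ok");
}
```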
Currently consuming input events looks like this:
fn move_on_input(
    world: &mut SubWorld,
    mut state: ResMut<State>,
    time: Res<Time>,
    keyboard_input_events: Res<Events<KeyboardInput>>,
    query: &mut Query<(Write<Translation>, Read<Handle<Mesh>>)>,
) {
    let mut moving_left = false;
    let mut moving_right = false;
    for event in state.event_reader.iter(&keyboard_input_events) {
        if let KeyboardInput {
            virtual_key_code: Some(key_code),
            state,
            ..
        } = event
        {
            if *key_code == VirtualKeyCode::Left {
                moving_left = state.is_pressed();
            } else if *key_code == VirtualKeyCode::Right {
                moving_right = state.is_pressed();
            }
        }
    }

    for (mut translation, _) in query.iter_mut(world) {
        if moving_left {
            translation.0 += math::vec3(1.0, 0.0, 0.0) * time.delta_seconds;
        }
        if moving_right {
            translation.0 += math::vec3(-1.0, 0.0, 0.0) * time.delta_seconds;
        }
    }
}
It would be great if it looked like this instead.
fn move_on_input(
    world: &mut SubWorld,
    mut state: ResMut<State>,
    time: Res<Time>,
    input: Res<Input>,
    query: &mut Query<(Write<Translation>, Read<Handle<Mesh>>)>,
) {
    let moving_left = input.key_pressed(VirtualKeyCode::Left);
    let moving_right = input.key_pressed(VirtualKeyCode::Right);
    for (mut translation, _) in query.iter_mut(world) {
        if moving_left {
            translation.0 += math::vec3(1.0, 0.0, 0.0) * time.delta_seconds;
        }
        if moving_right {
            translation.0 += math::vec3(-1.0, 0.0, 0.0) * time.delta_seconds;
        }
    }
}
The raw events are a good first step (and shouldn't be removed), but the common "was this input pressed before this update" case should be optimized.
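A sketch of the state-tracking resource such a Res<Input> could be backed by. Names are illustrative: press()/release() would be driven by the raw event stream, and update() would run once per frame to clear the "just" sets:

```rust
use std::collections::HashSet;
use std::hash::Hash;

struct Input<T: Copy + Eq + Hash> {
    pressed: HashSet<T>,
    just_pressed: HashSet<T>,
    just_released: HashSet<T>,
}

impl<T: Copy + Eq + Hash> Input<T> {
    fn new() -> Self {
        Input {
            pressed: HashSet::new(),
            just_pressed: HashSet::new(),
            just_released: HashSet::new(),
        }
    }
    fn press(&mut self, key: T) {
        if self.pressed.insert(key) {
            self.just_pressed.insert(key);
        }
    }
    fn release(&mut self, key: T) {
        if self.pressed.remove(&key) {
            self.just_released.insert(key);
        }
    }
    fn key_pressed(&self, key: T) -> bool {
        self.pressed.contains(&key)
    }
    // Called once per frame: held state persists, "just" state clears.
    fn update(&mut self) {
        self.just_pressed.clear();
        self.just_released.clear();
    }
}

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Key { Left, Right }

fn main() {
    let mut input = Input::new();
    input.press(Key::Left);
    assert!(input.key_pressed(Key::Left));
    assert!(!input.key_pressed(Key::Right));
    input.update();
    assert!(input.key_pressed(Key::Left)); // still held after update
    input.release(Key::Left);
    assert!(!input.key_pressed(Key::Left));
    println!("ok");
}
```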
Maybe a 3D model from a different perspective? This would be a good test of the render graph.
Legion component change events / filters are fired whenever a system query contains Write<Component>, regardless of whether anything was actually written. This makes these events useless for logic that needs to run only when components are actually changed / as an optimization.
This might be solvable with some custom RefMut logic:
struct RefMut<T> {
    // other members
    modified: bool,
}

// on deref_mut, set modified to true
impl DerefMut for RefMut<T> { /* ... */ }

// inside IntoSystem: fire change events only for components
// that were actually dereferenced mutably
for (a, b) in query.iter(world) {
    (a.fire(subscribers), b.fire(subscribers));
}
Modifying an asset currently requires grabbing a mutable reference to Assets<T>. This makes sense conceptually, but Legion's Scheduler (correctly) synchronizes when a mutable reference is grabbed. This means multiple systems requiring mut access to the same Assets<T> storage can't run in parallel.
It is worth adding some interior mutability / RwLocks to Assets<T> to see if this improves performance for common use cases.
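A minimal sketch of that interior mutability, with a hypothetical HandleId and a deliberately simplified API: systems could then take Assets<T> immutably (letting the scheduler run them in parallel) while mutation goes through the lock:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

type HandleId = u64; // stand-in for Bevy's real handle type

struct Assets<T> {
    storage: RwLock<HashMap<HandleId, T>>,
}

impl<T: Clone> Assets<T> {
    fn new() -> Self {
        Assets { storage: RwLock::new(HashMap::new()) }
    }
    // Note: &self, not &mut self -- mutation goes through the RwLock,
    // so systems only need shared access to the collection.
    fn set(&self, handle: HandleId, asset: T) {
        self.storage.write().unwrap().insert(handle, asset);
    }
    fn get(&self, handle: HandleId) -> Option<T> {
        self.storage.read().unwrap().get(&handle).cloned()
    }
}

fn main() {
    let materials: Assets<String> = Assets::new();
    materials.set(1, "red".to_string()); // no &mut needed
    assert_eq!(materials.get(1), Some("red".to_string()));
    println!("ok");
}
```

The cost is per-access locking, which is why it's worth measuring against common use cases first.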
Transparency is currently per-entity (Draw::is_transparent), but this breaks down when an entity has multiple materials or draw calls. I think it makes sense to somehow plug this in to the Draw component.
Currently the Draw component is a flat list of low level render commands. Maybe it could be grouped by draw call and paired with additional draw config?
The order of entity "siblings" is important for 2d rendering and scene management. Therefore it should be possible to explicitly define sibling order.
Currently child entities define their parent with a Parent(Entity) component, which then gets added in arbitrary order to the parent's Children component. This approach fits the ECS model quite nicely and has the extremely valuable property of not needing access to the parent's Children component at insertion time (which makes adding entities via CommandBuffers really nice and also makes parallelism easier).
One solution would be to add an optional "position hint" to the Parent component, which allows a child to position itself relative to another child (ex: Parent::new(entity, Position::After(entity))).
Another option would be to make the Children component the source of truth.
Currently bind groups are re-created each frame to prevent "garbage" from stacking up from removed resources. @kvark has advised me that this approach isn't optimal.
As a short term fix, he suggested just retaining them for X frames before dropping them. We could also make WgpuResources aware of BindGroup resource dependencies. The information is all there; it would just be a matter of storing a RenderResource->Vec<BindGroupId> mapping, then cleaning up bind groups for removed resources.
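A sketch of the retain-for-X-frames fix (BindGroupId, the retention window, and the cache shape are all illustrative):

```rust
use std::collections::HashMap;

type BindGroupId = u64; // stand-in for the real id type
const RETAIN_FRAMES: u64 = 3; // illustrative retention window

struct BindGroupCache {
    last_used: HashMap<BindGroupId, u64>,
}

impl BindGroupCache {
    fn mark_used(&mut self, id: BindGroupId, frame: u64) {
        self.last_used.insert(id, frame);
    }
    // Drop bind groups that haven't been used within the retention window,
    // instead of re-creating everything every frame.
    fn cleanup(&mut self, current_frame: u64) {
        self.last_used
            .retain(|_, last| current_frame.saturating_sub(*last) <= RETAIN_FRAMES);
    }
}

fn main() {
    let mut cache = BindGroupCache { last_used: HashMap::new() };
    cache.mark_used(1, 0);
    cache.mark_used(2, 4);
    cache.cleanup(5);
    // Bind group 1 was last used 5 frames ago -> dropped; 2 survives.
    assert!(!cache.last_used.contains_key(&1));
    assert!(cache.last_used.contains_key(&2));
    println!("ok");
}
```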
Currently assets can reference other assets via handles, but the Bevy asset system isn't aware of these dependencies. This hasn't really been a problem until now, but as we start making Bevy more event driven (ex: only updating an asset's RenderResources when the asset changes), this starts creating problems.
For example, when a ColorMaterial asset's referenced texture changes (same handle, different value) a new texture is created on the gpu and the old one is cleaned up because of the Texture AssetEvents. However the ColorMaterial RenderResources aren't updated to use the new RenderResourceId because the AssetRenderResourcesNode hasn't received a new ColorMaterial AssetEvent.
I think the right solution is to allow assets to enumerate their dependencies and use those dependencies to fire a new AssetEvent::DependencyEvent(AssetEvent) event. As AssetEvents are currently generic (AssetEvent), this clearly won't work for arbitrary dependency types.
The alternative solution is to handle dependencies on a case-by-case basis, but it's hard to make that work for a generic abstraction like RenderResourceNodes.
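The dependency-enumeration idea could look roughly like this sketch (the Asset trait method, HandleId, and the ColorMaterial shape are hypothetical): the asset system would call dependencies() to know that a Texture change should also produce a dependency event for every ColorMaterial referencing it.

```rust
type HandleId = u64; // stand-in for the real handle type

trait Asset {
    // Most assets have no dependencies, so default to empty.
    fn dependencies(&self) -> Vec<HandleId> {
        Vec::new()
    }
}

struct ColorMaterial {
    texture: Option<HandleId>,
}

impl Asset for ColorMaterial {
    fn dependencies(&self) -> Vec<HandleId> {
        // The referenced texture is this material's only dependency.
        self.texture.into_iter().collect()
    }
}

fn main() {
    let material = ColorMaterial { texture: Some(42) };
    assert_eq!(material.dependencies(), vec![42]);
    let plain = ColorMaterial { texture: None };
    assert!(plain.dependencies().is_empty());
    println!("ok");
}
```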
Bevy currently creates new samplers for each texture. This isn't efficient given that there is a small number of possible samplers. It probably makes sense to prevent duplicates.
Notably we should distinguish between adding new entities and extending the current entity ("add" vs "with")
Bevy's prelude is basically "anything that might get used by devs at some point". This isn't particularly helpful. It should be pared down to the bare essentials.