
rfcs's People

Contributors

annekitsune, dotellie, fhaynes, maroider, moxinilian, torkleyy


rfcs's Issues

[RFC] Pass list.

Here will be a list of passes that we want added to amethyst after hal is integrated.
Only the ones not currently existing in amethyst will be in this list.
In the comments, add a name and a description of the passes you would like to see added.

SHADER/PASS LIST

Name: Triplanar
Description: Makes the UVs follow the world coordinates of the object. If you move the object, the texture appears to stay fixed in world space, sliding over the mesh. This is mostly used for seamless 2D level building and 3D user-made constructions, because it produces seamless "corners": if a wall on the x axis and a wall on the z axis intersect each other, the texture looks seamless from any viewing angle.

Name: LightingDebug
Description: Shows light vectors and light levels.

[Rhuagh]
Name: Wireframe
Description: Only mesh lines are visible.

Name: PhysicsDebug
Description: Wireframe for collision shapes. Shows collision event normals. Shows velocity, momentum and force vectors.

[RFC] Asset Management and Pipeline

I believe we can all agree that good tooling is essential for making users feel productive. Amethyst rests on a solid foundation of core tech but to really make a data-driven engine shine, solid editing and introspection tools are essential. I'd like to take a step closer to the Amethyst tooling vision and address the issue of assets, a common factor in all game editing tools.

If this seems like a good direction I'll be working on an RFC that will discuss how these tools may interact with assets once there is consensus on the problems to solve. This issue will initially contain some of my thoughts around problems and features I'd like to see in Amethyst with a suggested technical design coming in the RFC. Looking forward to your thoughts!

Background & Problem Statements

Asset lifecycle

Production-ready game engines generally have multi-stage asset pipelines. This means that an asset goes through multiple steps of processing and conversion before being loadable in the engine runtime. Usually there are three stages for an asset.

Input -> Edit -> Runtime

The input format is usually some form of common data interchange format like fbx, png, tga. The edit format is engine-specific and generally abstracts the input format as well as provides the possibility to add metadata to the asset. The runtime format is optimized for quick loading and can be adapted per platform or based on other build settings. There are multiple benefits to this separation.

  • By separating the specifics of an input format from the data it provides, the engine becomes more extensible. PNG, TGA and JPG provide textures, which are generally collections of two-dimensional color arrays. FBX, OBJ and GLTF provide 3D scene data. Support for more formats that provide similar data can be added more easily.
  • How assets are loaded at runtime can be configured during edit time which simplifies loading APIs significantly. Decisions like which compression format to use for a texture or whether mipmaps should be generated can be made with tools instead of cluttering game code.
  • Asset preparation passes such as mesh simplification or texture compression can be configured and performed at build time instead of during runtime.
  • Assets can be built with different configurations for different purposes. Textures can be compressed differently for phones or consoles to ensure a smaller artifact and shaders can be precompiled for specific platforms to save on startup time.
  • Custom processing steps can be implemented by users. This can be useful to automatically configure something per platform or fix up some quirk in a third-party exporting tool.

Build scalability through "pure functional asset pipelines"

It's nice when you don't have to wait for your computer, even if you have 80 GB of source data like some people do. Frostbite may have spent a ton of time making their build pipelines fast, and Amethyst doesn't really need to do that yet, but the key take-away, and the enabling feature of their fully parallel and cachable build pipeline, is a deterministic mapping from source data to build artifact. This is what enables a bunch of caching tricks and studio-wide networked caching systems that can, combined with a few 40G switches, make your build times quite acceptable.

To clearly state the requirement, this means being able to deterministically hash a source asset and all variables that become an input to the build process and also have the build artifact be deterministic. This usually means hashing the asset's build config, target platform, compiler version, build code version, importer version, asset dependency hashes. Once you have calculated the hash, you can request the artifact off the network or a local cache.
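
As a hedged sketch of that requirement (names are illustrative, and std's `DefaultHasher` stands in for the stable content hash, e.g. SHA-256, a real pipeline would use):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Everything that can influence the build artifact goes into the key.
#[derive(Hash)]
struct BuildInputs<'a> {
    source_hash: u64,             // hash of the source asset's bytes
    build_config: &'a str,        // e.g. "bc7-compressed, mipmaps on"
    target_platform: &'a str,     // e.g. "android", "ps4"
    compiler_version: &'a str,
    importer_version: u32,
    dependency_hashes: &'a [u64], // build keys of asset dependencies
}

// Deterministic key: identical inputs always map to the same artifact, so the
// artifact can be fetched from a local or networked cache instead of rebuilt.
fn build_key(inputs: &BuildInputs) -> u64 {
    let mut hasher = DefaultHasher::new();
    inputs.hash(&mut hasher);
    hasher.finish()
}
```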

NVMe m.2 drives are becoming cheaper and cheaper with multiple GBps in sequential read & write speeds. I'd be really glad if Amethyst could scale to the limitations of the hardware in its asset pipeline.

Concurrent Modifications

While I enjoy the Unix philosophy and admire the vision for Amethyst tools, there is a large difference between Unix command-line tools and game development tools. Game development tools are usually interactive and persistent in their display of information while Unix tools run once over a set of data, output a result and terminate. This difference results in one of the greatest challenges of computer science: cache invalidation!

Let's take a particle system editor as an example. Perhaps it edits an entity (prefab) asset. These assets are files on disk and presumably are not parsed and loaded each frame, thus there is a cached in-memory representation of the disk contents. If another tool, say a general component inspector of some kind, was to edit the same file concurrently there is a chance of inconsistent or lost data unless the tools exercised some form of cache coherence protocol.

Hot reload

Quick iteration times are key to staying competitive in the current game development market and hot reloading of as many asset types as possible is a large leap in the right direction. A running game should be able to pick up on asset changes from tooling over the network to enable hot reloading on non-development machines.

Search and query

Presumably many tools will want to search for specific files or attributes in files. This is useful when finding which assets reference a specific asset for example. Being able to find what you are looking for is amazing and if this can be provided as a common service to all tooling that'd presumably save a lot of time for both tool developers to avoid duplicated code and for users to find what they need. Attribute indexing would presumably require asset reflection of some sort.

Asset identifiers and Renaming or Moving

Users want to be able to rename or move assets without compromising their data and therefore references between assets cannot be based on paths, and preferably loading assets is not path-based either. Bitsquid's blog discusses this issue in detail.

The productivity gained from being able to describe your entire game as a graph, where each edge is an asset reference and each node is an asset, is incredible in many cases. It enables a better understanding of resource usage through visualization, and makes it possible to automatically optimize asset runtime layouts based on dependencies.

Persistent asset IDs can also enable "serialization of handles" where they are represented on disk as asset IDs but the in-memory representation is a handle that is materialized as the asset is loaded.
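
A minimal sketch of that idea (the types here are illustrative, not Amethyst's actual asset API):

```rust
use std::collections::HashMap;

/// Stable identifier stored on disk; survives renames and moves.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct AssetId(u128);

/// Runtime-only handle into the loaded-asset table.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Handle(u32);

#[derive(Default)]
struct AssetStorage {
    loaded: HashMap<AssetId, Handle>,
    next: u32,
}

impl AssetStorage {
    /// Materialize an on-disk AssetId into a runtime Handle, "loading" the
    /// asset on first request and returning the cached handle afterwards.
    fn resolve(&mut self, id: AssetId) -> Handle {
        if let Some(&h) = self.loaded.get(&id) {
            return h;
        }
        let h = Handle(self.next);
        self.next += 1;
        self.loaded.insert(id, h);
        h
    }
}
```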

Asset Versioning or Version Upgrade

I'd argue that the #1 reason Linux has seen such success is the dedication Linus Torvalds has for maintaining compatibility between versions. When updating from one version of the Linux kernel to another, you never need to update any other applications and this is due to the strict policy of "no user space regressions".

It'd be nice if there was a way for Amethyst to ensure that assets created in older versions are still compatible when updating, or at least that there is an upgrade path. Otherwise Amethyst may end up with people staying on older versions and splitting the community at each major update. I'm not saying that this promise of not breaking people's projects needs to exist right now, but there should be a technical plan for how this can be handled in the future to ensure both a smooth upgrade process for users and preferably a low maintenance cost for the Amethyst developers.

An important note is that it's easier to automatically upgrade people's data than their code. As a data-driven engine that's probably something to embrace.

Scripting RFC: Custom layouts

This came up on the #scripting discord chat, but for many structs it should be possible to remove the requirement for #[repr(C)]. Since we're generating the struct definitions at runtime - at least, certainly for Lua - we can just have a type that lets you specify offsets like so:

// `offset_of!` below is assumed to come from the `memoffset` crate.
enum FieldsDef<'a> {
    Named(&'a [(&'a str, usize)]),
    Unnamed(&'a [usize]),
}

trait GetFields {
    const FIELDS: FieldsDef<'static>;
}

struct MyStruct {
    a: Foo,
    b: Bar,
}

impl GetFields for MyStruct {
    const FIELDS: FieldsDef<'static> = FieldsDef::Named(&[
        ("a", offset_of!(MyStruct, a)),
        ("b", offset_of!(MyStruct, b)),
    ]);
}

Then the backend can use this to generate a C-compatible struct definition on the fly, reordering fields and generating explicit padding bytes where necessary. This allows Rust to reorder fields and do whatever else but without sacrificing the ability to access those fields in the scripting layer.

This also allows us to support tuples and external types. For tuples, we can do something like so:

impl<A, B> GetFields for (A, B) {
    const FIELDS: FieldsDef<'static> = FieldsDef::Unnamed(&[
        offset_of!(Self, 0),
        offset_of!(Self, 1),
    ]);
}

Then for external structs we can do something like so:

pub struct MyStruct {
    pub a: Foo,
    pub b: Bar,
}

// ... in another crate ...

struct MyWrapper(MyStruct);

impl GetFields for MyWrapper {
    const FIELDS: FieldsDef<'static> = FieldsDef::Named(&[
        ("a", offset_of!(MyWrapper, 0.a)),
        ("b", offset_of!(MyWrapper, 0.b)),
    ]);
}

I've used an associated const in this, but it's worth noting that you could also support structs with runtime-defined layouts if you made it a fn. The only example I can think of is wasmtime's VMCtx, which isn't something that you'd be likely to pass to a script, but it's worth thinking about.
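
A runnable sketch of that fn variant, using pointer arithmetic in place of offset_of! and a simplified trait (all names here are illustrative):

```rust
trait GetFields {
    /// Runtime-computed (name, byte offset) pairs, the `fn` flavour of FIELDS.
    fn fields(&self) -> Vec<(&'static str, usize)>;
}

struct MyStruct {
    a: u32,
    b: f64,
}

impl GetFields for MyStruct {
    fn fields(&self) -> Vec<(&'static str, usize)> {
        let base = self as *const Self as usize;
        vec![
            ("a", &self.a as *const u32 as usize - base),
            ("b", &self.b as *const f64 as usize - base),
        ]
    }
}

/// What a scripting backend might do with the metadata: look up a field's
/// offset by name and read the value through a raw pointer.
fn read_f64_field(v: &MyStruct, name: &str) -> Option<f64> {
    let (_, off) = v.fields().into_iter().find(|(n, _)| *n == name)?;
    // SAFETY: the offset was computed from a live reference to this exact
    // struct, and the caller asked for a field we know is an f64.
    unsafe { Some(*((v as *const MyStruct as *const u8).add(off) as *const f64)) }
}
```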

This could be wrapped very simply in a derive macro - if Rust supported macro_rules macros as annotations, it'd even be simple enough to implement using that.

Shutdown and cleanup missing behavior

Currently, when you cleanly quit the game, there are no clean-up actions that can be run (disconnecting from a server, saving something to a file, etc.).

There should be a way for the engine to call user code when the game is shutting down.

There is a method, Application::shutdown, which is unused and could be repurposed for that.

There are two places (or more?) where exit code could live: State and System.

State is annoying, because you will probably end up copy-pasting the shutdown method between your states.

System, on the other hand, would be a bit better, since a shutdown() method could be added to it. However, you will usually want your systems to be stateless, so a cleanup method isn't really ideal here either.
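
For illustration, a system-level hook could look something like this (a hypothetical trait, not existing Amethyst API):

```rust
/// Hypothetical hook the dispatcher would call once, in order, at shutdown.
trait ShutdownHook {
    fn on_shutdown(&mut self);
}

struct NetworkSystem {
    connected: bool,
}

impl ShutdownHook for NetworkSystem {
    fn on_shutdown(&mut self) {
        // e.g. send a disconnect packet, flush a save file, ...
        self.connected = false;
    }
}

/// What the engine would run when the game is shutting down.
fn run_shutdown_hooks(hooks: &mut [&mut dyn ShutdownHook]) {
    for hook in hooks.iter_mut() {
        hook.on_shutdown();
    }
}
```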

An rfc would need to be opened for discussion by the person taking this issue in charge.
Thanks!

[RFC] System name constants

Hey, I had to write this out so that I don't forget it.


When a bundle adds a system to a dispatcher, it provides a name which is used in system dependency ordering. The system name is effectively API, as external systems that depend on that system specify it as a dependency.

While the number of systems is small, it isn't hard to maintain a few &'static strs. However, when there are many systems, a typo or a changed name in a &str defers failure to application startup (runtime) instead of compile time. If we could use constants, or a function that returns a derivable system name, it would decrease the maintenance cost for larger applications.
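
One possible shape (illustrative; TransformSystem and the trait name are placeholders, not Amethyst API):

```rust
/// A system exposes its dispatcher name as an associated constant, so
/// dependents can reference `TransformSystem::NAME` instead of a literal.
trait NamedSystem {
    const NAME: &'static str;
}

struct TransformSystem;

impl NamedSystem for TransformSystem {
    const NAME: &'static str = "transform_system";
}

// In a bundle, instead of builder.add(TransformSystem, "transform_system", &[]),
// one would write builder.add(TransformSystem, TransformSystem::NAME, &[...]),
// and a typo in the name becomes a compile error at the definition site.
```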

Things to consider:

  • Systems are generally not public outside of a crate, so we'd have to expose the name some other way.
  • What about Systems with generic parameters, such as AnimationControlSystem<I, T>, where the type parameters are defined by the consumer / application code?

Possible useful crates:

Alternatives:

  • Do nothing — perhaps the maintenance cost isn't big enough for this effort to be undertaken.

[RFC] Amethyst UI

Ui RFC

Here we go...

Warning: Things might be out of order or otherwise hard to understand. Don't be afraid to jump between sections to get a better view.

So here, I will be describing the requirements, the concepts, the choices offered and the tradeoffs as well as the list of tasks.

Surprisingly enough, most games actually have more work in their ui than in the actual gameplay. Even really small projects (anything that isn't a prototype) will have a ui for the menu, in-game information display, settings screen, etc.

Let's start by finding use cases, as they will give us an objective to reach.
I will be using huge games, to make sure not to miss anything.

Since the use cases are taken from actual games and I will be listing only the re-usable/common components in them, nobody can complain that the scope is too big. :)

Use cases: Images

Endless Space
Endless Space
Path of Exile
Wow
COD_BO4

Use cases: Text

So now, let's extract the use cases from those pictures.

  • 2d text
  • 3d ui (render to texture)
  • 3d positioned flat text
  • 3d positioned text with depth
  • 3d text can be visible through 3d elements or occluded by them (including partial occlusion)
  • 2d world text (billboard)
  • images
  • color box
  • color patterns (gradient)
  • color filter (change saturation, grayscale, alpha, etc)
  • multiple colors in the same text vs multiple aligned text segments
  • locale support
  • display data
  • clicky buttons
  • clicky checkboxes
  • draggable elements (free positioning)
  • draggable elements (constrained) (can be used to scroll through scroll views, or to move sliders heads)
  • drag and drop (with constraints on what can be dropped where)
  • Scroll views
  • Sliders
  • tab view
  • menu bar
  • editable text
  • focusable elements
  • keyboard, mouse, controller, touchscreen, rc remote, wii remote inputs
  • on screen keyboard (for use with mouse or controller)
  • overlays (a.k.a help bubbles/popups)
  • automatic layouting
  • auto-resizable text
  • auto-resizable images/color boxes
  • circular progress bars
  • adapts to different screensizes + auto resizes content
  • reactive (past a minimal size, content rearranges itself according to other layout rules)
  • transparency settings for all elements
  • glow effect
  • transitions/animation(fade in, fade out, movements, scaling, etc..)
  • draw lines (straight + curve)
  • non-rectangular ui elements or triggers
  • ui scaling, working in scroll views (path of exile picture)
  • draw a texture (including dynamic) to screen (also allows showing 3d objects on ui, but requires rendering in a different world)
  • occlusion pattern (example: make a square image just show as a circle, removing the corners. also affecting the event triggers)
  • Progress bar (gradient/image + partial display + background)
  • Scrolling text (or view)
  • Input fields (edit text + background + focus + keyboard handling)
  • Growable lists
  • Different presentation when selected
  • Different presentation depending on set condition (have enough money to buy this upgrade ? white : gray)
  • Can select multiple elements at once (lists)
  • Expandable views (bottom button closes the window)
  • Graphs!
  • play sound when hovering or clicking
  • change texture when hovering or clicking, or animate, or apply effect, or trigger a custom side effect in the world (click a button -> spawn an "explosion" entity in the world, triggering a state Trans)
  • Links (opens browser)
  • Theming (changing the color of all links to red, changing the margin or padding for some elements, etc)
  • tables
  • a simple in-game console to enter commands that get sent to amethyst_terminal (when it exists)

Use cases: Conclusion

There are a lot of use cases, and a lot of them are really complex. It would be easy to do what every other engine does: provide just the basic elements and let game devs build their own custom ones. But if we, the ones creating the engine, can't provide those elements, can we honestly expect game developers to build them from the outside?

Also, if we can implement those using reusable components and systems, and make all of that data oriented, I think we will be able to cover 99.9% of all the use cases of the ui.

Big Categories

Let's create some categories to know which parts will need to be implemented and what can be done when.

I'll be listing some use cases for each to act as a "description". The lists are non-exhaustive.

Eventing

  • User input
  • Selecting
  • Drag and drop
  • Event chaining and side effects

Layouting

  • Loading layout definitions
  • Resize elements
  • Ordering elements
  • Dynamic sizes (lists)
  • Min/Max/Preferred sizes

Rendering

  • Animation
  • Show text (2d, 3d billboard, 3d with rotation)
  • Gradients
  • Drawing renders (camera) on other textures (not specifically related to ui but required)

Partial Solutions / Implementation Details

Here is a list of design solutions for some of the use cases. Some are pretty much ready, some require more thinking, and others are just pieces of solutions that need more work.

Note: A lot are missing, so feel free to write on the discord server or reply on github with more designs. Contributions are greatly appreciated!

Here we go!

Drag

Add events to the UiEvent enum. The UiEvent enum already exists and is responsible for notifying the engine about which user events (inputs) happened on which ui elements.

pub struct UiEvent {
    target: Entity,
    event_type: UiEventType,
}

enum UiEventType {
    Click, // Happens when ClickStop is triggered on the same element ClickStart was originally.
    ClickStart,
    ClickStop,
    ClickHold, // Only emitted after ClickStart, before ClickStop, and only when hovering.
    HoverStart,
    HoverStop,
    Hovering,
    Dragged { element_offset: Vec2 }, // Element offset is the offset between ClickStart and the element's middle position.
    Dropped { dropped_on: Entity },
}

Only entities having the "Draggable" component can be dragged.

#[derive(Component)]
struct Draggable<I> {
    keep_original: bool, // When dragging an entity, the original entity can optionally be made invisible for the duration of the grab.
    clone_original: bool, // Don't remove the original when dragging. If you drop, it will create a cloned entity.
    constraint_x: Axis2Range, // Constrains how much on the x axis you can move the dragged entity.
    constraint_y: Axis2Range, // Constrains how much on the y axis you can move the dragged entity.
    ghost_alpha: f32,
    obj_type: I, // Used in conjunction with DropZone to limit which draggable can be dropped where.
}

Dragging an entity can cause a ghost entity to appear (a semi-transparent clone of the original entity moving with the mouse, using element_offset).
When hovering over draggable elements, your mouse optionally changes to a grab icon.
The dragged ghost can have a DragGhost component to identify it.

#[derive(Component)]
struct DropZone<I> {
    accepted_types: Vec<I>, // The list of user-defined types that can be dropped here.
}
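
The check a drop system would perform might be as simple as this sketch (simplified versions of the two components, without the specs Component machinery):

```rust
// Example user-defined drop type; any PartialEq type would work.
#[derive(PartialEq)]
enum ItemKind {
    Weapon,
    Potion,
}

struct Draggable<I> {
    obj_type: I,
}

struct DropZone<I> {
    accepted_types: Vec<I>,
}

/// A drop is accepted only when the zone lists the draggable's type.
fn can_drop<I: PartialEq>(drag: &Draggable<I>, zone: &DropZone<I>) -> bool {
    zone.accepted_types.contains(&drag.obj_type)
}
```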

Event chains/re-triggers/re-emitters

The point of this is to generate either more events, or side effects from previously emitted events.

Here's an example of an event chain:

  • User clicks on the screen -> a device event is emitted from winit
  • The ui system catches that event and checks if any interactable ui element was located there. It finds one and emits a UiEvent for that entity with event_type: Click
  • The EventRetriggerSystem catches that event (as well as State::handle_event and custom user-defined systems!), and checks if there is an EventRetrigger component on that entity. It does find one. This particular EventRetrigger was configured to create a Trans event that gets added to the TransQueue
  • The main execution loop of Amethyst catches that Trans event and applies the changes to the StateMachine. (PR currently opened for this.)

This can basically be re-used for everything that makes more sense to be event-driven instead of data-driven (user-input, network Future calls, etc).

The implementation for this is still unfinished. Here's a gist of what I had in mind:

Note: You can have multiple EventRetrigger components on your entity, provided they have unique In, Out types.

// The component
pub trait EventRetrigger: Component {
    type In;
    type Out;
    /// Maps an incoming event to zero or more outgoing events.
    fn apply(&self, event: &Self::In) -> Vec<Self::Out>;
}

// The system
// You need one per EventRetrigger type you are using.
pub struct EventRetriggerSystem<T: EventRetrigger> {
    reader: ReaderId<T::In>, // shrev reader for the incoming channel
}

impl<'a, T: EventRetrigger> System<'a> for EventRetriggerSystem<T> {
    type SystemData = (
        Read<'a, EventChannel<T::In>>,
        Write<'a, EventChannel<T::Out>>,
        ReadStorage<'a, T>,
    );

    fn run(&mut self, (in_events, mut out_events, retriggers): Self::SystemData) {
        // Read the incoming events, call `apply` on the matching component,
        // and write the resulting events to the outgoing channel.
    }
}

Edit text

Currently, the edit text behaviour is

  1. Hardcoded in the pass.
  2. Partially duplicated in another file.

All the event handling, the rendering and the selection have dedicated code only for the text.

The plan here is to decompose all of this into various re-usable parts.
The edit text could either be composed of multiple sub-entities (one per letter), or just be one single text entity with extra components.

Depending on the choice made, there are different paths we can take for the event handling.

The selection should be managed by a SelectionSystem, which would be the same for all ui elements (tab moves to the next element, shift-tab moves back, UiEventType::ClickStart on an element selects it, etc...)

The rendering should also be divided into multiple parts.
There is:

  • The text
  • The vertical cursor or the horizontal bar at the bottom (insert mode)
  • The selected text overlay

Each of those should be managed by a specific system.
For example, the CursorSystem should move a child entity of the editable text according to the current position.
The blinking of the cursor would happen by using a Blinking component with a rate: f32 field in conjunction with a BlinkSystem that would be adding and removing a HiddenComponent over time.
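
The BlinkSystem's per-frame tick could be as small as this sketch (names assumed from the description above; in the real system the boolean would be the presence of a Hidden component):

```rust
struct Blinking {
    rate: f32,  // seconds between visibility toggles
    timer: f32, // time accumulated since the last toggle
}

/// Advance the blink timer and toggle the hidden flag each time it elapses.
fn blink_tick(blink: &mut Blinking, delta_seconds: f32, hidden: &mut bool) {
    blink.timer += delta_seconds;
    while blink.timer >= blink.rate {
        blink.timer -= blink.rate;
        *hidden = !*hidden;
    }
}
```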

Selection

I already wrote quite a bit on selection in previous sections, and I didn't fully think about all the ways you can select something, so I will skip the algorithm here and just show the data.

#[derive(Component)]
struct Selectable<G: PartialEq> {
    order: i32,
    multi_select_group: Option<G>, // If this is Some, you can select multiple entities at once with the same select group.
    auto_multi_select: bool, // Disables the need to use shift or control when multi selecting. Useful when clicking multiple choices in a list of options.
}

#[derive(Component)]
struct Selected;

Element re-use

A lot of what is currently in amethyst_ui looks a lot like other components that are already defined.

UiTransform::local + global positions should be decomposed to use Transform + GlobalTransform instead, and
GlobalTransform should have its Matrix4 decomposed into translation, rotation, scale and a cached_matrix.

UiTransform::id should go in Named

UiTransform::width + height should go into a Dimension component (or other name), if they are deemed necessary.

UiTransform::tab_order should go into the Selectable component.

UiTransform::scale_mode should go into whatever component is used with the new layouting logic.

UiTransform::opaque should probably be implicitly indicated by the Interactable component.

I'm also trying to think of a way of having the ui elements be sprites and use the DrawSprite pass.

Defining complex/composed ui elements

Once we are able to define recursive prefabs with child overrides, we will be able to define the most complex elements (the entire scene) as a composition of simpler elements.

Let's take a button for example.
It is composed of: A background image and a foreground text.
It is possible to interact with it in multiple ways: Selecting (tab key, or mouse), clicking, holding, hovering, etc.

Here is an example of what the base prefab could look like for a button:

// Background image
(
    transform: (
        y: -75.,
        width: 1000.,
        height: 75.,
        tab_order: 1,
        anchor: Middle,
    ),
    named: "button_background",
    background: (
        image: Data(Rgba((0.09, 0.02, 0.25, 1.0), (channel: Srgb))),
    ),
    selectable: (order: 1),
    interactable: (),
),
// Foreground text
(
    transform: (
        width: 1000.,
        height: 75.,
        tab_order: 1,
        anchor: Middle,
        stretch: XY(x_margin: 0., y_margin: 0.),
        opaque: false, // Let the events go through to the background.
    ),
    named: "button_text",
    text: (
        text: "pass",
        font: File("assets/base/font/arial.ttf", Ttf, ()),
        font_size: 45.,
        color: (0.2, 0.2, 1.0, 1.0),
        align: Middle,
        password: true,
    ),
    parent: 0, // Points to first entity in list
),

And its usage:

// My custom button
(
    subprefab: (
        load_from: (
            // path: "", // you can load from path
            predefined: ButtonPrefab, // or from pre-defined prefabs
        ),
        overrides: [
            // Overrides of sub entity 0, a.k.a background
            (
                named: "my_background_name",
            ),
            // Overrides of sub entity 1
            (
                text: (
                    text: "Hi!",
                    // ... pretend I copy pasted the remaining of the prefab, or that we can actually override on a field level
                ),
            ),
        ],
    ),
),
                

Ui Editor

Since we have such a focus on being data-oriented and data-driven, it only makes sense to have the ui be the same way. As such, making a ui editor is as simple as making the prefab editor, with a bit of extra work on the front-end.

The bulk of the work will be making the prefab editor. I'm not sure how this will be done yet.
A temporary solution was proposed by @randomPoison until a clean design is found: Run a dummy game with the prefab types getting serialized and sent to the editor, edit the data in the editor and export that into json.
Basically, we create json templates that we fill in using a pretty interface.

Long-Term Requirements

  • Draw text on sprites
  • Draw sprites on 3d textures
  • Asset caching
  • Good eventing system (in progress)
  • Recursive prefabs

Crate Separation

A lot of what we make here could be re-usable for other Rust projects.
It could be a good idea to publish some crates for everyone to use.

One for the layouting; this one is quite obvious.
Probably one describing the different ui events and elements from a data standpoint (with a dependency on specs).
And then the one in amethyst_ui that integrates the other two and makes them compatible with the prefabs.

Remaining Questions

  • Multiple colors in same text component VS multiple text with layout so they look like a single string
  • Display data: Data binding? User defined system? impl SyncToText: Component?
  • Which layout algorithm will we use? Should it be externally defined? If so, how to define default components?
  • How to define occlusion patterns (pictures with alpha?). How to do the render for those?
  • How to make circular filling animations?
  • Theming?
  • How to integrate the locales with the text?
  • Make implementation designs for everything that wasn't covered yet

If you are not good with code, you can still help with the design of the api and the data layouts.
If you are good with code, you can implement said designs into the engine.

As a rule of thumb for the designs, try to make the Systems as small as possible and the Components as re-usable as possible, while staying self-contained (and small).

Imgur Image Collection

Tags explanation:

  • Diff Hard: The different parts aren't all hard, but the whole thing is complex.
  • Priority Important: Some things aren't super important and are there to improve the visual, others are important improvements to the api that we can't go around.
  • Status Ready: Some parts are ready to be implemented (at least as prototypes), mostly the design section.
  • Project Ui: This is ui
  • RFC Discussing: Discussions and new designs will go on for a long long long time I'm afraid. This is the biggest RFC of amethyst I think.

RFC: Std I/O driven application (aka `amethyst_commands`)

Issue 999, this is gonna be epic!

Summary

Ability to control an Amethyst application using commands issued through stdin, with human-friendly terminal interaction.

Motivation

Inspecting and manipulating the state1 of an application at run time is a crucial part of development, with at least the following use cases:

  • Determining that the application is behaving as expected.
  • Experimenting with new features.
  • Triggering certain cases.
  • Investigating / troubleshooting unexpected behaviour.
  • Automatically driving the application for integration tests.

A command terminal will greatly reduce the effort to carry out the aforementioned tasks.

1 state here means the runtime values, not amethyst::State

Prior Art

Copied from #995 (warning: code heavy)

okay, so this post is code heavy, but it's how I've done commands in my game (youtube). It shouldn't force people to use the state machine, since event types are "plug in if you need it".

Crate: stdio_view (probably analogous to amethyst_commands)

  • Reads stdin strings, uses shell_words to parse into separate tokens.
  • Parses the first token into an AppEventVariant to determine which AppEvent the tokens correspond to. On success, it sends a tuple: (AppEventVariant, Vec<String>) (the tokens) to an EventChannel<(AppEventVariant, Vec<String>)>.
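
The first step could be sketched like this (using plain whitespace splitting for self-containment where the real code uses shell_words, and a plain string where it uses a strum-derived variant enum):

```rust
/// Split a line read from stdin into (variant token, argument tokens).
/// Returns None for empty input, mirroring "no command entered".
fn parse_line(line: &str) -> Option<(String, Vec<String>)> {
    let mut tokens: Vec<String> =
        line.split_whitespace().map(str::to_string).collect();
    if tokens.is_empty() {
        return None;
    }
    let variant = tokens.remove(0);
    Some((variant, tokens))
}
```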

Changes if put into Amethyst:

  • StdinSystem would be generic over top level types E and EVariant, which would take in AppEvent and AppEventVariant.

Crate: application_event

  • Contains AppEvent and AppEventVariant.

  • AppEvent is an enum over all custom event types, AppEventVariant is derived from AppEvent, without the fields.

    Example:

    use character_selection_model::CharacterSelectionEvent;
    use map_selection_model::MapSelectionEvent;
    
    #[derive(Clone, Debug, Display, EnumDiscriminants, From, PartialEq)]
    #[strum_discriminants(
        name(AppEventVariant),
        derive(Display, EnumIter, EnumString),
        strum(serialize_all = "snake_case")
    )]
    pub enum AppEvent {
        /// `character_selection` events.
        CharacterSelection(CharacterSelectionEvent),
        /// `map_selection` events.
        MapSelection(MapSelectionEvent),
    }

This would be an application specific crate, so it wouldn't go into Amethyst. If I want to have State event control, this will include an additional variant State(StateEvent) from use amethyst_state::StateEvent;, where StateEvent carries the information of what to do (e.g. Pop or Switch).

Crate: stdio_spi

  • StdinMapper is a trait with the following associated types:

    use structopt::StructOpt;
    
    use Result;
    
    /// Maps tokens from stdin to a state specific event.
    pub trait StdinMapper {
        /// Resource needed by the mapper to construct the state specific event.
        ///
        /// Ideally we can have this be the `SystemData` of an ECS system. However, we cannot add
        /// a `Resources: for<'res> SystemData<'res>` trait bound as generic associated types (GATs)
        /// are not yet implemented. See:
        ///
        /// * <https://users.rust-lang.org/t/17444>
        /// * <https://github.com/rust-lang/rust/issues/44265>
        type Resource;
        /// State specific event type that this maps tokens to.
        type Event: Send + Sync + 'static;
        /// Data structure representing the arguments.
        type Args: StructOpt;
        /// Returns the state specific event constructed from stdin tokens.
        ///
        /// # Parameters
        ///
        /// * `tokens`: Tokens received from stdin.
        fn map(resource: &Self::Resource, args: Self::Args) -> Result<Self::Event>;
    }

    Args is a T: StructOpt into which we can convert the String tokens before passing them to the map function. Resource is there because the constructed AppEvent can contain fields that are constructed based on an ECS resource.

  • This crate also provides a generic MapperSystem that reads from EventChannel<(AppEventVariant, Vec<String>)> from the stdio_view crate. If the variant matches the AppEventVariant this system is responsible for, it passes all of the tokens to a T: StdinMapper that understands how to turn them into an AppEvent, given the Resource.

    /// Type to fetch the application event channel.
    type MapperSystemData<'s, SysData> = (
        Read<'s, EventChannel<VariantAndTokens>>,
        Write<'s, EventChannel<AppEvent>>,
        SysData,
    );
    
    impl<'s, M> System<'s> for MapperSystem<M>
    where
        M: StdinMapper + TypeName,
        M::Resource: Default + Send + Sync + 'static,
        AppEvent: From<M::Event>,
    {
        type SystemData = MapperSystemData<'s, Read<'s, M::Resource>>;
    
        fn run(&mut self, (variant_channel, mut app_event_channel, resources): Self::SystemData) {
            // ...
            let args = M::Args::from_iter_safe(tokens.iter())?;
            M::map(&resources, args)
            // ... collect each event

            app_event_channel.drain_vec_write(&mut events);
        }
    }

Crate: character_selection_stdio (or any other crate that supports stdin -> AppEvent)

  • Implements the stdio_spi.

  • The Args type:

    #[derive(Clone, Debug, PartialEq, StructOpt)]
    pub enum MapSelectionEventArgs {
        /// Select event.
        #[structopt(name = "select")]
        Select {
            /// Slug of the map or random, e.g. "default/eruption", "random".
            #[structopt(short = "s", long = "selection")]
            selection: String,
        },
    }
  • The StdinMapper type:

    impl StdinMapper for MapSelectionEventStdinMapper {
        type Resource = MapAssets;         // Read resource from the `World`, I take a `MapHandle` from it
        type Event = MapSelectionEvent;    // Event to map to
        type Args = MapSelectionEventArgs; // Strong typed arguments, rather than the String tokens
    
        fn map(map_assets: &MapAssets, args: Self::Args) -> Result<Self::Event> {
            match args {
                MapSelectionEventArgs::Select { selection } => {
                    Self::map_select_event(map_assets, &selection)
                }
            }
        }
    }
  • The bundle, which adds a MapperSystem<MapSelectionEventStdinMapper>:

    builder.add(
        MapperSystem::<MapSelectionEventStdinMapper>::new(AppEventVariant::MapSelection),
        &MapperSystem::<MapSelectionEventStdinMapper>::type_name(),
        &[],
    );

Can use it as inspiration to drive the design, or I'm happy to push my code up for the reusable parts (stdio_spi should be usable as is, stdio_view probably needs a re-write).

Detailed Design

TODO: discuss

Alternatives

The amethyst-editor will let you do some of the above tasks (inspecting, manipulating entities and components). It doesn't cater for:

  • Server side inspection (e.g. SSH to an application running on a headless server)
  • Automated tests
  • Easy repeatability (source controlled actions)

[RFC] Prototyping speed improvements

Problem

Prototyping in amethyst is slow.

100 lines of code seems to be the minimum to make the smallest of games. While it's not a lot, it is definitely more than necessary.

Source

I'm taking https://github.com/amethyst/amethyst/blob/develop/examples/sphere/main.rs as a model. The friction comes from:

  • Lack of good defaults for generic types.
  • Imports.
  • Lack of handle_event utils for simple cases -> quit on any key, quit on some key, is key pressed.

Propositions

Lack of good defaults for generic types.

Adding sensible defaults as type alias for generics.

  1. Add a default.rs file.
  2. Make a DefaultSomething type alias with a default consistent with the others.
  3. Export the module default.

Example:
amethyst_input/src/default.rs

pub type DefaultInputHandler = InputHandler<String, String>;

Imports

Expand the prelude to include as many common types as possible. I'm not sure how much this slows down compilation when using the prelude, but I did see compile times degrade in one of my projects that used only preludes.

Lack of handle_event utils for simple cases -> Quit on any key, quit on some key, is key pressed.

I have already implemented this inside a custom project; I'm currently debating whether it should live inside the engine.
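As a sketch of what such a helper could look like (the event and key types below are simplified stand-ins invented for this sketch, not amethyst's StateEvent/VirtualKeyCode):

```rust
// Sketch of a "quit on close or on some key" helper. The types here are
// stand-ins; a real helper would match on amethyst's window/input events.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Key {
    Escape,
    Space,
    Other,
}

#[derive(Debug)]
enum WindowEvent {
    CloseRequested,
    KeyPressed(Key),
}

/// Returns true if the window was closed or the given key was pressed,
/// collapsing the common two-arm match into a single call.
fn is_close_requested_or_key(event: &WindowEvent, key: Key) -> bool {
    match event {
        WindowEvent::CloseRequested => true,
        WindowEvent::KeyPressed(k) => *k == key,
    }
}

fn main() {
    assert!(is_close_requested_or_key(&WindowEvent::CloseRequested, Key::Escape));
    assert!(is_close_requested_or_key(&WindowEvent::KeyPressed(Key::Escape), Key::Escape));
    assert!(!is_close_requested_or_key(&WindowEvent::KeyPressed(Key::Space), Key::Escape));
    println!("ok");
}
```

In a State's handle_event, the two-line quit check then becomes a single, readable call.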

[RFC] Tile map components and rendering

This RFC was inspired by recent discussions on Gitter.

A very large volume of games rely on some form of grid for level building, whether it be something like Mario, Minecraft, Starcraft, or Metroid. The need for this kind of grid is so ubiquitous that I believe there's a very strong argument for building an optimized, purpose-built infrastructure for it in Amethyst.

Using entities to represent each tile for this runs into problems of its own, as it causes component storages to very rapidly become poorly optimized for memory usage. Additionally, when using this approach in a 3D space we very quickly run into problems with just having more entities than permitted by the engine. So my proposal is to introduce a new component and rendering capabilities into the engine.

The new component might look something like this:

pub struct TileMap<I, A> {
    width: u32,
    height: u32,
    length: u32,
    tiles: Vec<Tile<I, A>>,
}

Where the Tile structure looks about like this:

pub struct Tile<I, A> {
    id: I,
    mesh: MeshHandle,
    material: Material,
    attributes: A,
}

The generic I is mostly intended for use with a user provided identification system, so that they can easily determine if something is, for example, a lava tile. A is intended for use to store arbitrary attribute information provided by the user. So now the TileMap can be attached to an Entity and then each Tile is drawn in 3D space at the position it occupies within the TileMap. This could also permit some more aggressive CPU side culling, where we only draw the tiles that are on the edges of the 3D TileMap.

This can also function as a 2D tilemap if you just set the height to 1.

Additionally once physics are in place it would likely be desirable to automatically generate the collision mesh that other entities can use to interact with this TileMap, probably by marking Tiles as "passable" or "impassable".
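A TileMap like the one above would typically back its tiles with a single flat Vec. A small sketch of the index math (the row-major layout chosen here, x fastest, then y, then z, is an assumption of this sketch, not part of the RFC):

```rust
// Map a (x, y, z) tile coordinate into the flat `tiles: Vec<Tile>` allocation.
fn tile_index(width: u32, height: u32, x: u32, y: u32, z: u32) -> usize {
    (x + y * width + z * width * height) as usize
}

fn main() {
    // height = 1 gives the 2D tilemap case mentioned in the RFC.
    let (width, height, length) = (16u32, 1u32, 16u32);
    assert_eq!(tile_index(width, height, 0, 0, 0), 0);
    assert_eq!(tile_index(width, height, 15, 0, 0), 15);
    assert_eq!(tile_index(width, height, 0, 0, 1), 16); // next "row" along z
    assert_eq!(
        tile_index(width, height, 15, 0, 15),
        (width * length - 1) as usize
    );
    println!("ok");
}
```

Keeping all tiles in one allocation is what enables the cache-friendly iteration and edge-only culling the RFC hints at.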

[RFC] New amethyst render

Fancy and shiny new amethyst render proposal.

This RFC is an attempt to systematize the ideas and thoughts on a new render that I started writing almost a year ago.

What this RFC is about

I will try to describe what the new amethyst render could look like, both from the user's perspective and in implementation.
The aim of this RFC is to gather feedback on the problems mentioned and the solutions proposed.

What's wrong with the current render

Let's step back and look at the current render. Why do we even want to replace it?
The first thing that comes to mind is singlethreadedness. My early attempt to make things run in parallel only made it worse (proven by @Xaeroxe: when he simplified it to run in one thread, performance increased). Come to think of it, this is obvious: OpenGL has a singlethreaded heart. Commands we encode in parallel become serialized at flush time.

The second pain point is singlethreadedness. Yes. Again. It hurts that much. We can't even create and upload resources (images and buffers) in parallel. This means we can't do it in our Systems. The current workaround is to defer resource initialization to be completed by the render, and the loading code is overcomplicated because of this. It also makes it impossible to generate data for the GPU each frame outside the render (think mesh generation from voxels).

Significant overhead. The current render works on pre-ll gfx, which only supports OpenGL right now, and each layer adds overhead.
The new APIs provide opportunities to optimize in many more places and reduce the CPU bottleneck. Yet pre-ll gfx doesn't support the newer APIs, and even if it did, it would give the user the same freedom as OpenGL, where you can't optimize much based on usage.
OpenGL users employ arcane techniques to squeeze out as much performance as possible. If we start doing so, we may end up with an unmaintainable pile of hacks buried in an endless pit of sorrow.

Solution

We need to write a new render based on modern graphics APIs like Vulkan, DirectX 12, and Metal.

But which to choose? We can't choose one without sacrificing platform support,
and we can't manually support each of them either.
Gladly, that is already taken care of.

gfx-hal

gfx-hal is not an evolution of pre-ll gfx; it's a brand new thing. gfx-hal's API is based on the Vulkan API, with added rustyness but minimal overhead.

gfx-hal should open the path to supporting the following platforms:

Sadly, gfx-hal is not even close to becoming stable.

ash

Another alternative is ash. With Vulkan/Metal bridge like MoltenVK or gfx-portability we would support:

ash requires more boilerplate and careful implementation. It is essentially the raw Vulkan API for Rust,
which means it is pretty stable.

Supporting multiple backends in our higher-level render.

It can be done. It is even simpler to do in higher-level code. But I don't think it is a feasible option.

High-level render design outline

Modules

While amethyst will use the render as a whole, that doesn't mean the render code must be written as one huge blob. It may be helpful to design the render as a collection of modules, each of which solves one problem at a time.

What problems should the higher-level render solve, you may ask?
Let's describe a few:

Memory management.

Modern APIs have a complex memory management story, with lots of properties, rules and requirements for the application.
The higher-level render should give the user a straightforward API to create/destroy resources and transfer data between resources and the host.

Work scheduling.

Vulkan has 3 types of objects in its API for scheduling work on the device: vkQueue, vkCommandPool and vkCommandBuffer.

vkQueue inherits capabilities from its family, and the user is responsible for not scheduling unsupported commands. The higher-level render should check that (at compile time where possible).

vkCommandPool is as simple as an axe. No fancy wrapper is required beyond tracking the queue family it belongs to.

vkCommandBuffer has implicit state that changes the subset of functions that can be used with it. The higher-level render should prevent both wrong usage and unnoticed state changes.
To prevent implicit transitions to the Invalid state, there must be a facility that keeps resources referenced in recorded commands from being destroyed.

Pipelines

Manually describing and switching graphics and compute pipelines is hard and error-prone.
The higher-level render should support pipelines described in a declarative manner and automate their binding.

Synchronization

New graphics APIs such as Vulkan require explicit synchronization between commands when they depend on each other or use the same resource.
This topic is really complex. The rules are sophisticated, and errors can stay hidden until release.
The framegraph approach allows automatic synchronization between the nodes of the graph.
The gfx-chain library does this kind of automatic scheduling between queues and derives the required synchronization. It should be reworked to remove the gfx-hal dependency, from which only a few structures are used anyway.
Because resource usage is known up front, it is possible to greatly optimize memory usage by aliasing transient resources that never exist at the same time.
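The transient-resource aliasing mentioned above boils down to an interval-overlap test over the graph's node order. A toy sketch (the interval representation and the pairwise check are assumptions of this sketch, not how gfx-chain actually works):

```rust
// Two transient resources whose usage intervals (in graph-node order) never
// overlap can share the same allocation.
#[derive(Debug, Clone, Copy)]
struct Lifetime {
    first_use: u32, // index of the first graph node that touches the resource
    last_use: u32,  // index of the last graph node that touches it
}

fn can_alias(a: Lifetime, b: Lifetime) -> bool {
    // Disjoint inclusive intervals: one resource dies before the other is born.
    a.last_use < b.first_use || b.last_use < a.first_use
}

fn main() {
    let gbuffer = Lifetime { first_use: 0, last_use: 2 }; // consumed by pass 2
    let bloom = Lifetime { first_use: 3, last_use: 4 };   // only lives in post-processing
    let depth = Lifetime { first_use: 0, last_use: 4 };   // needed the whole frame
    assert!(can_alias(gbuffer, bloom)); // may share one allocation
    assert!(!can_alias(gbuffer, depth));
    println!("ok");
}
```

With full upfront knowledge of the frame, a render graph can run this kind of check for every pair of transient attachments and fold them into far fewer allocations.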

Descriptors

Handling descriptors is non-trivial work and should be simplified by the higher-level render.
But this can be done later, as the only ones who will work with them are render-pass writers.
A suboptimal use of descriptors is very simple to write and should be OK until it becomes a bottleneck.

Higher-level primitives

While the graphics API consumes resources, pipelines and lists of encoded commands to do its job, the user shouldn't be faced with such low-level concepts unless they try to render something non-trivial.
Well-defined common use cases could be bundled and provided out of the box.

  • Meshes that groups vertex buffers under some attribute layout
  • Textures for both static, dynamic images and RTT
  • Materials as collection of textures
  • Sprites with animation frames in single texture
  • Terrains generated from a heightmap
  • Integrated UI of our choice.

would be a good start.

What there already is

At this point I have memory manager ready-to-test and prototype of command buffer/queue safety wrappers. There is also

TODO: Add shiny diagrams and fancy snippets.

Terrains (as in Ground Mesh)

Should terrains be made in an external tool?
Should it be limited to a single plane per xz position?

How do we manage texture merging between different heights?

Planned features:
MAIN

  • Mesh generation
  • Get height(s) for xz position
  • Mesh collider
  • Texturing
  • Texture merging with different heights (splatting) + modes / heightmap
  • Noise generators support (external crates)

SECONDARY

  • Foliage support
  • Cave support
  • Procedural infinite generation algorithm support
  • Voxel vs Plane
  • Chunk based for infinite generated
  • 3D plane & voxel, 2D voxel
  • Mesh optimisation
  • LOD Support

This description will be modified after discussion.
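"Get height(s) for xz position" from the list above can be done by bilinearly interpolating a heightmap. A std-only sketch (grid spacing of 1.0 world unit per sample and row-major storage are assumptions of this sketch):

```rust
// Bilinearly interpolate a row-major heightmap at a fractional (x, z) position.
fn height_at(heights: &[f32], width: usize, depth: usize, x: f32, z: f32) -> f32 {
    // Clamp so the 2x2 sample window stays inside the grid.
    let x0 = (x.floor() as usize).min(width - 2);
    let z0 = (z.floor() as usize).min(depth - 2);
    let (tx, tz) = (x - x0 as f32, z - z0 as f32);
    let h = |xi: usize, zi: usize| heights[zi * width + xi];
    // Interpolate along x on both rows, then along z between the rows.
    let top = h(x0, z0) * (1.0 - tx) + h(x0 + 1, z0) * tx;
    let bottom = h(x0, z0 + 1) * (1.0 - tx) + h(x0 + 1, z0 + 1) * tx;
    top * (1.0 - tz) + bottom * tz
}

fn main() {
    // 2x2 heightmap with corner heights 0, 1, 2, 3.
    let heights = [0.0, 1.0, 2.0, 3.0];
    assert_eq!(height_at(&heights, 2, 2, 0.0, 0.0), 0.0);
    assert_eq!(height_at(&heights, 2, 2, 1.0, 0.0), 1.0);
    assert_eq!(height_at(&heights, 2, 2, 0.5, 0.5), 1.5); // center averages all corners
    println!("ok");
}
```

The same sampling is what a mesh collider or a "snap entity to ground" system would call per query; chunked/infinite terrain would only change how `heights` is fetched.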

Loader/Asset API Ergonomics

The current loader API is (subjectively) a little heavy.

For example, to load a GLTF file:

let loader = world.read_resource::<Loader>();
let progress = ();
let format = GltfSceneFormat;
let options = Default::default();
let storage = &world.read_resource();

let asset = loader.load("path/to/gltf.gltf", format, options, progress, storage);

And to load a GLTF file from a custom source:

let loader = world.read_resource::<Loader>();
let progress = ();
let format = GltfSceneFormat;
let options = Default::default();
let source = /*...*/;
let storage = &world.read_resource();

let asset = loader.load_from("path/to/gltf.gltf", format, options, source, progress, storage);

I think this API could be made slightly cleaner by doing (any subset of) a few things:

  • Dynamically dispatch on the resource URL to determine the source and format (maybe integrating with the vnodes system)
  • Put the storages within the Loader / give the Loader some way to get handles to the storages, so that the user doesn't have to
  • Use the builder pattern

Then, a simple asset load can look like just:

let asset = loader.asset("/io/assets/path/to/texture.png").load();

And a complex load can look like:

let asset = loader
    .asset("/io/assets/special_source/path_to_scene.gltf")
    .progress(progress_counter)
    .options(GltfSceneOptions { /* ... */ })
    .custom_storage(custom_storage) // not sure if this would be needed
    .load();

Since loading assets is something that you do a lot, I think it's worth it to make the API nice to use.
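To make the proposal concrete, here is one possible shape for such a builder. Every name here (AssetBuilder, progress, load) is hypothetical and invented for this sketch; a real implementation would return typed handles and talk to the actual Loader, sources and storages:

```rust
// Hypothetical shape of the proposed builder-style loading API.
#[derive(Default, Debug)]
struct AssetBuilder {
    path: String,
    progress: Option<String>, // stand-in for a ProgressCounter handle
}

impl AssetBuilder {
    fn new(path: &str) -> Self {
        AssetBuilder {
            path: path.to_string(),
            ..Default::default()
        }
    }

    // Each optional knob is a chainable method; unset knobs fall back to defaults.
    fn progress(mut self, p: &str) -> Self {
        self.progress = Some(p.to_string());
        self
    }

    // load() consumes the builder. A real implementation would dispatch on the
    // extension/URL here (e.g. ".png" -> image format) and return a Handle<A>.
    fn load(self) -> String {
        format!("loading {} (progress: {:?})", self.path, self.progress)
    }
}

fn main() {
    let simple = AssetBuilder::new("/io/assets/path/to/texture.png").load();
    assert!(simple.contains("texture.png"));
    let tracked = AssetBuilder::new("scene.gltf").progress("counter").load();
    assert!(tracked.contains("counter"));
    println!("ok");
}
```

The builder keeps the simple case one-line while letting the complex case add knobs without positional-argument noise, which is the main ergonomic win being argued for.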

Downsides:

  • Makes things slightly more prone to runtime errors (can have file-type mismatches)
  • Gives Loader more responsibilities

[RFC Discussion] Legion ECS Evolution

Legion ECS Evolution

Following lengthy discussion on both Discord and the Amethyst Forum (most of which, including chat logs, can be found here), we propose with this RFC to move Amethyst from SPECS to Legion, an ECS framework building on concepts in SPECS Parallel ECS as well as lessons learned since. This proposal stems from an improved foundational flexibility in the approach of Legion, which would be untenable to bring to the current SPECS crate without forcing all users of SPECS to adapt to a rewrite centered on the needs of Amethyst. The flexibility in Legion is full of tradeoffs: generally benefits in performance and runtime flexibility, traded against some of the ergonomics of the SPECS interface. The benefits, and the impetus for seeking them, are described in the "Motivations" section; the implications of the tradeoffs that follow are outlined in greater detail in the "Tradeoffs" section.

There are some core parts of Amethyst which may either need to considerably change when moving to Legion, or would otherwise just benefit from substantial changes to embrace the flexibility of Legion. Notably, systems in Legion are FnMut closures, and all systems require usage of SystemDesc to construct the closure and its associated Query structure. The dispatch technique in Legion is necessarily very different from SPECS, and the parts of the engine dealing with dispatch may also be modified in terms of Legion's dispatcher. Furthermore, the platform of Legion provides ample opportunity to improve our Transform system, with improved change detection tools at our disposal. These changes as we understand them are described below in the "Refactoring" section.

The evaluation of this large transition requires undertaking a progressive port of Amethyst to Legion with a temporary synchronization shim between SPECS and Legion. This effort exists here, utilizing the Legion fork here. Currently, this progressive fork has fully transitioned the Amethyst Renderer, one of the largest and most involved parts of the engine ECS-wise, and is capable of running that demo we're all familiar with:

(screenshot: the familiar renderer demo running on the legion port)

Not only can you take a peek at what actual code transitioned directly to Legion looks like in this fork; the refactoring work there can also be reused if this RFC is accepted, while actively helping to reveal where there may be shortcomings or surprises.

Motivations

The forum thread outlines the deficiencies we are facing with specs in detail. This table below is a high level summary of the problems we are having with specs, and how legion solves each one.

| Specs | Legion |
| --- | --- |
| Typed storage nature prevents proper FFI | All underlying legion storage is based on TypeId lookups for resources and components |
| The hibitset crate has allocation/reallocation overhead and branch misses | Archetypes eliminate the need for entity ID collections being used for iteration |
| Sparse storages cause cache incoherence | Legion guarantees allocation of similar entities into contiguous, aligned chunks with all their components in linear memory |
| Storage fetching inherently causes many branch mispredictions | See previous |
| Storage methodology inherently makes FlaggedStorage not thread safe | Queries in legion store filter and change state, allowing for extremely granular change detection at the Archetype, Chunk and Entity level |
| Component mutation flagging is limited to any mutable access | Legion dispatches on an Archetype basis instead of per component, allowing parallel execution across the same component data for different entities (a special case exists for sparse read/write of components where this isn't the case) |
| Parallelization limited to component level, no granular accesses | See previous information about Archetypes |
| Many elided and explicit lifetimes throughout make code less ergonomic | The System API is designed to hide the majority of these lifetimes behind safe, ergonomic wrappers |
| ParJoin has mutation limitations | See previous statements about the system dispatcher and Archetypes |

Immediate Benefits in a Nutshell

  • Significant performance gains

  • Scripting RFC can move forward

  • Queries open up many new optimizations for change detection such as culling, the transform system, etc.

  • More granular parallelization than we already have achieved

  • Resolves the dispatcher Order of Insertion design flaws

  • ???

Tradeoffs

These are some cumbersome changes or thoughts I ran into while porting. This is by no means comprehensive. Some of these items may not make sense until you understand legion and/or read the rest of this RFC.

  • Systems are moved to a closure, but ergonomics are given for still maintaining state, mainly in the use of FnMut[ref] for the closure, and an alternative build_disposable [ref].

  • All systems are built with closures, causing some initialization design changes in regards to reference borrowing

  • The SystemDesc/System types have been removed.

  • A trait type cannot be used for System declaration, due to the typed nature of Queries in legion. It is far more feasible and ergonomic to use closures for type deduction. The exception to this is thread-local execution, which can still be typed for ease of use.

Refactoring

This port of amethyst from specs -> legion has aimed to keep some of the conventions of specs that Amethyst users are already familiar with. Much of the implementation of Legion and the amethyst-specific components was heavily inspired by/copied from the currently standing implementations.

SystemBundle, System and Dispatcher refactor

This portion of the port will have the most significant impact on users, as this is where their day-to-day coding exists. The following is an example of the same system, in both specs and legion.

High Level Changes

  • Systems are all now FnMut closures. This allows for easier declaration and type deduction. They can capture variables from their builder for state. Additional ‘disposable’ build types are available for more complex stateful modes.

  • System data declarations are now all within a builder, and not on a trait.

  • Component data is now accessed via "queries" instead of “component storages”

  • Component addition/removal is now deferred, in line with entity creation/removal

  • Default resource allocation is removed, all world resource access now return Option<Ref>

  • System registration explicit dependencies are removed; execution is now ordered based on "Stages", which can be explicit priorities, but all system execution is flattened into a single data-dependent execution.

Following is an example of a basic system in both specs and legion

Specs
impl<'a> System<'a> for OrbitSystem {
    type SystemData = (
        Read<'a, Time>,
        ReadStorage<'a, Orbit>,
        WriteStorage<'a, Transform>,
        Write<'a, DebugLines>,
    );

    fn run(&mut self, (time, orbits, mut transforms, mut debug): Self::SystemData) {
        for (orbit, transform) in (&orbits, &mut transforms).join() {
            let angle = time.absolute_time_seconds() as f32 * orbit.time_scale;
            let cross = orbit.axis.cross(&Vector3::z()).normalize() * orbit.radius;
            let rot = UnitQuaternion::from_axis_angle(&orbit.axis, angle);
            let final_pos = (rot * cross) + orbit.center;

            debug.draw_line(
                orbit.center.into(),
                final_pos.into(),
                Srgba::new(0.0, 0.5, 1.0, 1.0),

            );
            transform.set_translation(final_pos);
        }
    }
}
Legion
fn build_orbit_system(
    world: &mut amethyst::core::legion::world::World,
) -> Box<dyn amethyst::core::legion::schedule::Schedulable> {
    SystemBuilder::<()>::new("OrbitSystem")
        .with_query(<(Write<Transform>, Read<Orbit>)>::query())
        .read_resource::<Time>()
        .write_resource::<DebugLines>()
        .build(move |commands, world, (time, debug), query| {
            query
                .iter_entities()
                .for_each(|(entity, (mut transform, orbit))| {
                    let angle = time.absolute_time_seconds() as f32 * orbit.time_scale;
                    let cross = orbit.axis.cross(&Vector3::z()).normalize() * orbit.radius;
                    let rot = UnitQuaternion::from_axis_angle(&orbit.axis, angle);

                    let final_pos = (rot * cross) + orbit.center;

                    debug.draw_line(
                        orbit.center.into(),
                        final_pos.into(),
                        Srgba::new(0.0, 0.5, 1.0, 1.0),
                    );
                    transform.set_translation(final_pos);
                });
        })
}

Example bundle Changes

RenderBundle - Specs

impl<'a, 'b, B: Backend> SystemBundle<'a, 'b> for RenderingBundle<B> {
    fn build(
        mut self,
        world: &mut World,
        builder: &mut DispatcherBuilder<'a, 'b>,
    ) -> Result<(), Error> {
        builder.add(MeshProcessorSystem::<B>::default(), "mesh_processor", &[]);
        builder.add(
            TextureProcessorSystem::<B>::default(),
            "texture_processor",
            &[],
        );

        builder.add(Processor::<Material>::new(), "material_processor", &[]);
        builder.add(
            Processor::<SpriteSheet>::new(),
            "sprite_sheet_processor",
            &[],
        );

        // make sure that all renderer-specific systems run after game code
        builder.add_barrier();
        for plugin in &mut self.plugins {
            plugin.on_build(world, builder)?;
        }
        builder.add_thread_local(RenderingSystem::<B, _>::new(self.into_graph_creator()));
        Ok(())
    }
}

RenderBundle - Legion

impl<'a, 'b, B: Backend> SystemBundle for RenderingBundle<B> {
    fn build(mut self, world: &mut World, builder: &mut DispatcherBuilder) -> Result<(), Error> {
        builder.add_system(Stage::Begin, build_mesh_processor::<B>);
        builder.add_system(Stage::Begin, build_texture_processor::<B>);
        builder.add_system(Stage::Begin, build_asset_processor::<Material>);
        builder.add_system(Stage::Begin, build_asset_processor::<SpriteSheet>);

        for mut plugin in &mut self.plugins {
            plugin.on_build(world, builder)?;

        }

        let config: rendy::factory::Config = Default::default();
        let (factory, families): (Factory<B>, _) = rendy::factory::init(config).unwrap();
        let queue_id = QueueId {
            family: families.family_by_index(0).id(),
            index: 0,
        };

        world.resources.insert(factory);
        world.resources.insert(queue_id);

        let mat = crate::legion::system::create_default_mat::<B>(&world.resources);
        world.resources.insert(crate::mtl::MaterialDefaults(mat));

        builder.add_thread_local(move |world| {
            build_rendering_system(world, self.into_graph_creator(), families)
        });

        Ok(())
    }
}

Parallelization of mutable queries

One of the major benefits of legion is its granularity with queries. Specs is currently not capable of performing a parallel join over Transform, because FlaggedStorage is not thread safe. Additionally, a mutable join such as the one above automatically flags all Transform components as mutated, meaning any reader will get N(entities) events.

In legion, however, we get this short syntax: query.par_for_each(|(entity, (mut transform, orbit))| {. Under the hood, this code actually accomplishes more than what ParJoin may in specs. This method threads on a per-chunk basis in legion, meaning similar data is iterated linearly and all components of those entities are in cache.
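The chunk-granular parallelism can be modeled with plain std scoped threads. This is a stand-in sketch, not legion's actual implementation, but it shows why handing out whole chunks avoids per-element contention:

```rust
/// Hand out disjoint mutable chunks of "component data" to scoped threads.
/// The borrow checker proves the chunks never alias, just as legion's
/// archetype chunks never alias, so no locking per element is needed.
fn double_in_parallel(data: &mut [f32]) {
    std::thread::scope(|scope| {
        for chunk in data.chunks_mut(16) {
            scope.spawn(move || {
                for t in chunk.iter_mut() {
                    *t *= 2.0; // mutate every "Transform" in this chunk
                }
            });
        }
    });
}

fn main() {
    let mut transforms: Vec<f32> = (0..64).map(|i| i as f32).collect();
    double_in_parallel(&mut transforms);
    assert_eq!(transforms[0], 0.0);
    assert_eq!(transforms[63], 126.0);
    println!("ok");
}
```

Because each thread owns a whole contiguous chunk, the data it touches is linear in memory, which is exactly the cache behaviour the paragraph above describes.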

Transform Refactor (legion_transform)

Legion transform implementation

@AThilenius has taken on the task of refactoring the core Transform system. This system had some faults of its own, which were also exacerbated by specs. The system itself is heavily tied in with how specs operates, so a rewrite of the transform system was already in the cards for this migration.

Hierarchy

This refactor is aimed towards following the Unity design, where the source-of-truth for the hierarchy (hot data) is stored in Parent components (ie. a child has a parent). This has the added benefit of ensuring only tree structures can be formed at the API level. Along with the Parent component, the transform system will create/update a Children component on each parent entity. This is necessary for efficient root->leaf iteration of trees, which is a needed operation for many systems but it should be noted that the Children component is only guaranteed valid after the transform systems have run and before any hierarchy edits have been made. Several other methods of storing the hierarchy were considered and prototyped, including an implicit linked-list, detailed here. Given all the tradeoffs and technical complexity of various methods (and because a very large game engine company has come to the same conclusion) the current method was chosen. More info can be found in the readme of legion_transform.

Transform

The original Amethyst transform was problematic for several reasons, largely because it was organically grown:

  • The local_to_world matrix was stored in the same component as the affine-transform values.

    • This also implies that use of the original transform component was tightly coupled with the hierarchy it belongs to (namely it’s parent chain).
  • The component was a full Affine transform (for some reason split between an Isometry and a non-uniform scale stored as a Vector3).

  • Much of the nalgebra API for 3D space transform creation/manipulation was replicated with little benefit.

Given the drawbacks of the original transform, it was decided to start from a clean slate, again taking inspiration from the new Unity ECS. User defined space transforms come in the form of the following components:

  • Translation (Vector3 XYZ translation)

  • Rotation (UnitQuaternion rotation)

  • Scale (single f32, used for uniform scaling, ie. where scale x == y == z)

  • NonUniformScale (a Vector3 for non-uniform scaling, which should be avoided when possible)

Any valid combination of these components can be added (although Scale and NonUniformScale are mutually exclusive). For example, if your entity only needs to translate, you need only pay the cost of storing and computing the translation, and can skip the cost of storing and computing (into the final homogeneous matrix4x4) the Rotation and Scale.

The final homogeneous matrix is stored in the LocalToWorld component, which describes the space transform from entity->world space regardless of hierarchy membership. In the event that an entity is a member of a hierarchy, an additional LocalToParent component (also a homogeneous matrix4x4) will be computed first and used for the final LocalToWorld update. This has the benefits of:

  • The LocalToWorld matrix will always exist and be updated for any entity with a space transform (ex any entity that should be rendered) regardless of hierarchy membership.

  • Any entity that is static (or is part of a static hierarchy) can have its LocalToWorld matrix pre-baked, and the other transform components need not be stored.

  • No other system that doesn't explicitly care about the hierarchy needs to know anything about it (e.g. rendering needs only the LocalToWorld component).
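The hierarchy update described above reduces to one matrix multiply per child: the parent's LocalToWorld composed with the child's LocalToParent. A minimal sketch, using a plain row-major 4x4 rather than the real matrix types:

```rust
type Mat4 = [[f32; 4]; 4];

// Standard row-major 4x4 multiply.
fn mul(a: &Mat4, b: &Mat4) -> Mat4 {
    let mut out = [[0.0f32; 4]; 4];
    for r in 0..4 {
        for c in 0..4 {
            for k in 0..4 {
                out[r][c] += a[r][k] * b[k][c];
            }
        }
    }
    out
}

// Pure translation matrix (translation in the last column).
fn translation(x: f32, y: f32, z: f32) -> Mat4 {
    [
        [1.0, 0.0, 0.0, x],
        [0.0, 1.0, 0.0, y],
        [0.0, 0.0, 1.0, z],
        [0.0, 0.0, 0.0, 1.0],
    ]
}

// For a hierarchy member:
//   child LocalToWorld = parent LocalToWorld * child LocalToParent
// Entities outside any hierarchy skip this and write LocalToWorld directly.
```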

Dispatcher refactor

The Dispatcher has been rewritten to utilize the built-in legion StageExecutor, while layering amethyst needs on top of it.

The builder and registration process still looks fairly similar; the main differences are that system naming is now debug-only, and explicit dependencies have been removed in favor of insertion order inferred via Stages, followed by fully parallel execution. ThreadLocal execution also still exists, to execute the end of any given frame on the local game thread. This means that a Stage or RelativeStage can be used to influence a system's insertion order, but the system will still execute based on its data dependencies, not strictly its place "in line".
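The idea that stages fix only a coarse order, while within a stage scheduling is driven by data dependencies, can be modeled roughly as follows. The names here are hypothetical and this is not the legion API, just an illustration of the conflict rule:

```rust
use std::collections::HashSet;

// A system declares which stage it belongs to and what data it touches.
struct SystemDecl {
    stage: u32,
    reads: HashSet<&'static str>,
    writes: HashSet<&'static str>,
}

// Two systems conflict if either one writes data the other reads or writes.
fn conflicts(a: &SystemDecl, b: &SystemDecl) -> bool {
    !a.writes.is_disjoint(&b.writes)
        || !a.writes.is_disjoint(&b.reads)
        || !b.writes.is_disjoint(&a.reads)
}

// Within one stage, non-conflicting systems may run in parallel;
// across stages, insertion order is preserved.
fn can_run_in_parallel(a: &SystemDecl, b: &SystemDecl) -> bool {
    a.stage == b.stage && !conflicts(a, b)
}
```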

Migration Story

World Synchronization Middleware

Because of the fundamental changes inherent in this migration, significant effort has gone into making transitioning, and using both "old" and "new" systems, as seamless as possible. This does come with a significant performance cost, but should allow people to utilize a mix of specs and legion while testing their migration.

  1. The engine provides a "LegionSyncer" trait; this is dispatched to configure and handle syncing of resources and components between specs and legion

  2. Underneath the LegionSyncer trait lies a set of automatic syncer implementations for the common use cases, including resource and component synchronization between the two worlds

  3. Dispatching already exists for both worlds; dispatch occurs as:

    1. Run specs

    2. Sync world Specs -> Legion

    3. Run Legion

    4. Sync world Legion -> Specs

  4. Helper functions have been added to the GameDataBuilder and DispatcherBuilder to streamline this process.
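The per-frame dispatch order in step 3 can be sketched as below. The trait name mirrors the RFC's LegionSyncer, but the method names and signatures here are assumptions made for illustration:

```rust
// Hypothetical shape of a syncer: one direction per phase of the frame.
trait LegionSyncer {
    fn sync_specs_to_legion(&mut self, log: &mut Vec<&'static str>);
    fn sync_legion_to_specs(&mut self, log: &mut Vec<&'static str>);
}

struct ComponentSyncer;

impl LegionSyncer for ComponentSyncer {
    fn sync_specs_to_legion(&mut self, log: &mut Vec<&'static str>) {
        log.push("sync specs -> legion");
    }
    fn sync_legion_to_specs(&mut self, log: &mut Vec<&'static str>) {
        log.push("sync legion -> specs");
    }
}

// One frame: run specs, sync forward, run legion, sync back.
fn run_frame(syncers: &mut [Box<dyn LegionSyncer>], log: &mut Vec<&'static str>) {
    log.push("run specs");
    for s in syncers.iter_mut() {
        s.sync_specs_to_legion(log);
    }
    log.push("run legion");
    for s in syncers.iter_mut() {
        s.sync_legion_to_specs(log);
    }
}
```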

In the current design, syncers are not enabled by default and must be explicitly selected by the user via the game data builder. For example:

.migration_resource_sync::<Scene>()
.migration_resource_sync::<RenderMode>()
.migration_component_sync::<Orbit>()
.migration_sync_bundle(amethyst::core::legion::Syncer::default())
.migration_sync_bundle(amethyst::renderer::legion::Syncer::<DefaultBackend>::default())

The above will explicitly synchronize the specified resources and components. Additionally, "Sync bundles" are provided for synchronizing the features of any given crate.

This synchronization use can be seen in the current examples, as they still utilize a large amount of unported specs systems.

Proposed Timeline

With the synchronization middleware available, users gain the ability to transition slowly to the new systems while actively testing their project. I propose the following release timeline, which allows users to skip versions as we go and to work in between:

  1. The current implementation is feature gated behind "legion-ecs" feature. This can be released as a new version of Amethyst to begin migration

  2. The next release performs a "hard swap" from "specs default" to "legion default". This would rename all the migration_with_XXX functions to refer to specs, and make legion the default. This would also include ironing out the legion world defaulting in the dispatchers and builders.

  3. The next release removes specs entirely, leaving legion in its place.

Brain-fart of changes needed by users

  • Render Plugins/Passes

    • Change renderer::bundle::* imports to renderer::legion::bundle

    • Change renderer::submodules::* imports to renderer::legion::submodules

    • All resource access changes from world to world.resources

      • fetch/fetch_mut change to get/get_mut

        • get/get_mut return an "Option" rather than a default value

      • Read and Write no longer take a lifetime parameter

      • ReadExpect/WriteExpect change to Read/Write

      • You can still use the same <(Data)>::fetch(&world.resources) syntax

    • Resources need to cache their own queries

      • THIS IS BEST DONE IN A CLOSURE, TO AVOID A TYPING NIGHTMARE

    • amethyst/amethyst-imgui@06c1a58: a commit showing such a port
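The fetch-to-get change above can be illustrated with a simplified stand-in for the resource map. This is not the real legion Resources type; the struct and behavior here are assumptions, but they show why call sites must now handle an Option instead of expecting a panic or a default:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Toy type-keyed resource map standing in for world.resources.
#[derive(Default)]
struct Resources {
    map: HashMap<TypeId, Box<dyn Any>>,
}

impl Resources {
    fn insert<T: Any>(&mut self, value: T) {
        self.map.insert(TypeId::of::<T>(), Box::new(value));
    }

    // Unlike the old fetch, get returns Option and never panics on a
    // missing resource; the caller decides how to handle None.
    fn get<T: Any>(&self) -> Option<&T> {
        self.map
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref())
    }
}
```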
