dungeons-and-drawing's People

Contributors

dacaspex, dodecahedra, exioncore, extremelyd1, valepr

dungeons-and-drawing's Issues

FontTexture seems to overlap

Problem
When generating certain pieces of text, some random pixels appear.
[screenshot of rendered text with stray pixels]

Expected cause
I think the texture coordinates are a bit off, or the characters really are this close to each other in the FontTexture.

Add player collision detection

At the moment the player does not know that the map exists. It can move freely in any direction.

The player should of course not be able to walk through solid tiles.

My approach

  • I would pass the map to the player in the constructor.
  • When the player tries to move, we first check whether this move is possible. We need to make sure we get the correct tile for this since the player coordinates are floats and each tile has whole number coordinates.
    • Though I think a simple floor should do the trick (see the sketch below).
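
A minimal sketch of that check, assuming hypothetical method names (map.getTile, Tile.isSolid) and a position with float x/z fields; the actual classes may differ:

// Sketch only: class and method names are assumptions, not the actual API.
public void move(float dx, float dz) {
    float newX = position.x + dx;
    float newZ = position.z + dz;

    // Player coordinates are floats, tiles have whole-number coordinates,
    // so a simple floor maps the target position onto the tile grid.
    int tileX = (int) Math.floor(newX);
    int tileZ = (int) Math.floor(newZ);

    Tile target = map.getTile(tileX, tileZ);
    if (target != null && !target.isSolid()) {
        position.x = newX;
        position.z = newZ;
    }
}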

Orientation of models loaded with .PLY files

I am experimenting a bit with the GitHub issue system. While this is a problem worth solving, I realize it is not our highest priority right now, and it is an easy fix I could do myself!

Problem
There seems to be a difference between the vertex positions in Blender and the vertex coordinates we have. When creating a model in Blender, everything is as expected. However, when importing the same model into our engine, the model is rotated -90 degrees around the x-axis (for now this is simply fixed by rotating the model by 90 degrees).

Expected cause:
Blender probably interprets the coordinates differently than we do. I think the axes are different. To be more precise: in the default camera position and orientation, Blender treats the Z-axis as the "vertical" axis and the Y-axis as the "depth" axis, while for us it is the other way around (see picture below).

[two screenshots comparing the Blender axes with the engine axes]
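
A hypothetical fix in the loader would be to convert from Blender's Z-up convention to our Y-up convention when reading the vertices (variable names are made up; a sign flip on one axis may also be needed to keep the coordinate system right-handed):

// In PLYLoader (sketch): swap y and z so Blender's "up" (z) becomes our "up" (y)
positions[i * 3]     = plyX;
positions[i * 3 + 1] = plyZ;
positions[i * 3 + 2] = plyY;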

Validation tool for pathfinding component

We need a validation tool for the A*-search algorithm to show that it works correctly outside of the game. The validation tool will be a Swing GUI in which the start and target of the A*-search can be selected on a grid and obstacles can be placed. The GUI will then show how the algorithm found its target.

Validator should display:

  • Opened Tiles
  • Closed Tiles
  • The parent-relation between tiles
  • The shortest path

Animation system

I was trying to come up with a simple way to design animations. Repeating animations can of course be done by using (for example) the sine function, but it would also be cool to have a keyframe-like system where you define pairs of a timestamp and a value, and the engine interpolates the value between keyframes. For example:

Keyframe 1: At second 0, scale = 100;
Keyframe 2: At second 10, scale = 0;
The engine then makes sure that at second 2 the scale is 80, at second 3 the scale is 70, etc.

This could for example be used to fade in and fade out GUI graphics, but could also be used to define a walkcycle for the character.
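
A minimal sketch of the interpolation, assuming a hypothetical Keyframe pair of (time, value); a real system would interpolate vectors, rotations, etc. as well:

import java.util.List;

// Sketch: Keyframe is assumed to be a simple (time, value) pair, sorted by time.
public static float valueAt(List<Keyframe> keyframes, float time) {
    if (time <= keyframes.get(0).time) {
        return keyframes.get(0).value;
    }
    for (int i = 0; i < keyframes.size() - 1; i++) {
        Keyframe a = keyframes.get(i);
        Keyframe b = keyframes.get(i + 1);
        if (time <= b.time) {
            // Linear interpolation between the two surrounding keyframes
            float alpha = (time - a.time) / (b.time - a.time);
            return a.value + alpha * (b.value - a.value);
        }
    }
    // Past the last keyframe: hold the final value
    return keyframes.get(keyframes.size() - 1).value;
}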

Material isColored property is not correct?

When I define all properties inside the material, the isColored boolean is set to true. In any other case, the boolean is left at false. My question would be whether this behaviour is intended and how I can easily create Materials for our PLY models.

Customizable shaders

Current situation
Right now we have one shader that is used by the Renderer to render all our graphics. This shader is partially customized by supplying it with two integers that are used as booleans (i.e. they have a value of either 1 or 0). These integers differentiate between textured and untextured rendering, and between a single solid color and per-vertex colors.

Problem
With the extension of our game, we would like to add GUI elements (buttons, debug information, text, pop-ups, etc). Most of these have a 2D appearance. That is, they might be 3D objects (for example the axis for the debug window), but we probably do not want them to be directly part of the scene. So the camera or the lights should have less influence or even no influence at all on these assets.
Another thing we want to add is the morphing shader. This as well would require some refactoring.

Options
I think it is important to decide early on how we are going to solve this problem. I foresee some options:

  1. Extend the current shader such that we use the integers we supply it with more efficiently to differentiate between the different cases. We can use structs and functions to keep the shader as structured and clear as possible, but it will probably turn into a mess quickly if we try to fit all our code into just one vertex and one fragment shader.
  2. We could supply the Renderer with different types of objects (scene, player / monster, GUI) and render them in separate passes. I think this influences the depth test, so we would have to think about how to approach it such that this poses no problem. It is, however, a more modular design in my opinion.
  3. We could give each entity a render method in which it defines its own shaders.
  4. We could have the GUI be part of the scene and limit camera movement when the gui is active.

Proposal
I think we should combine 1 and 2. I would keep everything that is part of the 3D scene together, so I would preferably not render objects that use the morphing shader separately, but implement this as an extra customizable option inside the current shader. Otherwise I feel it will be a challenge to make those objects part of the scene (i.e. to avoid them being drawn on top of the scene and not being influenced by shadows or by the surroundings when it comes to lighting). Rendering the 2D GUI after the 3D scene, however, seems like a good idea, as this also simplifies where we have to position our GUI (we simply draw over the 3D scene we just rendered).

Make a Light 'container' object that contains all lights

Right now we have to pass each light array separately to the render method. This makes the method very large and ugly. Perhaps a better solution would be to make an object which stores all the lights, which we pass to the render method. This reduces the number of parameters and makes it cleaner.
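
A rough sketch of such a container (the field types follow the light types we talk about in other issues; all names are assumptions):

public class SceneLight {
    private AmbientLight ambientLight;
    private DirectionalLight directionalLight;
    private PointLight[] pointLights;
    private SpotLight[] spotLights;

    // getters and setters omitted
}

// Renderer.render(...) would then take a single SceneLight instead of several arrays.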

Extend the Camera class with other types of cameras

To support both a debugging camera and a playing camera, I think it would be best to create other types of cameras, which all extend the basic Camera class. One camera could be for a free-flying mode, just like we have now (in the early part of the game), where we can control the camera ourselves. Another type of camera could be one that follows the player.
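
A sketch of what the hierarchy could look like (the Camera/Entity method names are assumptions, and the vector math assumes JOML):

public class FreeCamera extends Camera {
    @Override
    public void update(float delta) {
        // Move and rotate based on keyboard/mouse input, as the camera does now
    }
}

public class FollowCamera extends Camera {
    private final Entity target;
    private final Vector3f offset;

    public FollowCamera(Entity target, Vector3f offset) {
        this.target = target;
        this.offset = offset;
    }

    @Override
    public void update(float delta) {
        // Keep the camera at a fixed offset from the followed entity
        setPosition(new Vector3f(target.getPosition()).add(offset));
    }
}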

Create the first level

At this point we can start implementing an actual level. We need to get a few things done before we can fix this issue:

  • Get some basic tile models.
    • At this point these can just be coloured blocks, although a few nicely modelled tiles could help visualise what the level would look like.
  • Get a player model.
    • Does not have to be animated yet, heck, it can even be a cylinder.
  • Build a level using the map editor.
  • Setup the level with a new Level class.
  • Spawn the player in this level class as well.
  • Setup the FollowCamera to follow the player.
  • Collision detection for the player (#48)
  • Setup lights
    • A directional light for some nice effect
    • Ambient light to light up the whole scene
    • If shadows are fixed for point / spot lights (#12) then those can be added too.

GameEngine update has no notion of delta time

The GameEngine loop method calculates when to call the update method based on TARGET_UPS, which denotes the target number of updates per second. Since the update method gets this parameter instead of a delta time, a player object, for example, will walk slower or faster depending on how fast the game runs.

When passing the delta time, that is the time since the last update, game objects can update independently from the speed of the game.
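
A minimal sketch of the change (the loop is simplified; IGameLogic is mentioned elsewhere in the project, the other names are assumptions):

long lastTime = System.nanoTime();
while (running) {
    long now = System.nanoTime();
    float delta = (now - lastTime) / 1_000_000_000f; // seconds since the last update
    lastTime = now;

    gameLogic.update(delta);
    gameLogic.render();
}

// In, for example, Player.update(float delta):
// position.x += speed * delta;   // speed is now in units per second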

GameEngine loop renders continuously

The GameEngine loop renders continuously without letting the thread sleep, which causes the program to hoard all of the system's resources for rendering. We want to cap the engine at a certain FPS to prevent this.

See LWJGL 3 Tutorial
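
A rough sketch of such a cap, which sleeps the thread until the next frame is due (similar to the sync approach in that tutorial; getTime() is a hypothetical helper returning the current time in seconds):

private static final int TARGET_FPS = 60;

private void sync(double loopStartTime) {
    double frameTime = 1.0d / TARGET_FPS;
    double endTime = loopStartTime + frameTime;
    // Sleep in 1 ms slices until it is time to render the next frame
    while (getTime() < endTime) {
        try {
            Thread.sleep(1);
        } catch (InterruptedException ignored) {
        }
    }
}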

Map loading from file

Implement loading a map from a file and define a file format to use in the level editor.

Level initialisation

The level initialisation should not happen in the constructor, since the level class is already created at startup. Instead, all initialisation code should happen in the init() method.

At the moment, when switching to another level, we do not call the init method of that level. This call should be added.

Map file format

We need to agree on a map file format in which we can define what a map looks like. A hard requirement is a definition of which tile goes where. This format needs to be loaded into a map loader (and possibly exported by a map builder).

Suggested features

  • Where each tile goes: x y tile_type
  • Where the lights are: x y z light_type light_color
  • Where the player initially starts: x y orientation
  • Level name??

Some more considerations: each level will contain logic about which puzzle has which solution and when enemies become 'active'. Also, all the post-puzzle actions need to be defined somewhere. I think it is best to do this in a separate Level class, so that each new level has its own class with all this logic.

The file format is simply a neat way of loading a level without hardcoding every single tile.
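
A purely illustrative example of what such a file could look like, based on the suggested features above (none of these keywords or values are decided yet):

level tutorial_dungeon
tile 0 0 stone_floor
tile 1 0 stone_floor
tile 1 1 stone_wall
light 1 0 2 point #ffaa00
player 0 0 north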

Possibility to resize the window

As of right now, the window cannot be resized. It has a fixed size. It would be preferable to be able to resize the window to fit your own needs.

Possible issues

When the window is resized, the player might gain an 'advantage' since he/she can see more of the map. I don't think this is a big issue in our case, since you won't get that much of an advantage.

If we can work this out in a separate branch first, we can all see how this would change the code (and prevent merge conflicts).

Image distortion on updating window size

Problem
When resizing the window such that it has another ratio than the original 16:9, the image is distorted.
[screenshot of the distorted image]

Expected cause
One of the matrices important for rendering is probably not updated. I suspect the view matrix, the projection matrix, or both. The code in the GameWindow class has been properly adjusted, but I think we still have to adjust some code in at least the Render class.

Lighting error

What is the problem
Lighting does not always work. Some places do not seem to be affected by the lights. I expect this to be a computational error.

Reproduce the problem
Open the engine and make the point light orange. The tree is partially lit, as expected. Now make the light blue, and the green part of the tree model no longer seems affected by the light.

Implement a color palette

Maybe it is nice to have a static class with all the colors we use in the GUI, so we can quickly experiment with different styles. Right now, the default colors are defined as static variables at the top of the GUI classes.

Rotation vector is in degrees, not a directional vector

Right now, the camera rotation vector only contains the x and z rotation. But it would be nice to have the directional vector available. So we need some kind of computation to calculate the directional vector.

Example
If the camera is rotated 45 degrees around the z-axis, then the directional vector would be (1, 1, 0) (not normalised). This vector points in the direction the camera is facing.

With this, we can easily add up position and rotation to get a new location.
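
A sketch of the computation, assuming we use JOML: build a rotation matrix from the stored angles and apply it to a reference direction. With the example above, a 45-degree rotation around the z-axis applied to (1, 0, 0) gives roughly (0.71, 0.71, 0), i.e. (1, 1, 0) before normalisation.

// Sketch: the reference direction (1, 0, 0) and the rotation order are assumptions
// and may need adjusting to our camera conventions.
public static Vector3f directionFromRotation(Vector3f rotationDegrees) {
    return new Matrix4f()
            .rotateX((float) Math.toRadians(rotationDegrees.x))
            .rotateZ((float) Math.toRadians(rotationDegrees.z))
            .transformDirection(new Vector3f(1, 0, 0));
}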

Normal coordinates different orientation

In the PLYLoader class, we switch around the y and z coordinates in order to convert from the orientation Blender uses to the one we use.

We use the y axis for up, while Blender uses the z axis for up.
However, we do not account for this change in loading the normal coordinates. These are still in the Blender orientation.
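
A hypothetical fix in PLYLoader would be to apply the same swap to the normals that we already apply to the positions (variable names are made up):

// Sketch: convert the normal from Blender's Z-up to our Y-up orientation
normals[i * 3]     = plyNormalX;
normals[i * 3 + 1] = plyNormalZ;  // Blender's z ("up") becomes our y
normals[i * 3 + 2] = plyNormalY;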

Full screen doesn't work

When I try to load the game in full screen mode (fullScreen = true), then the window is still only a small portion of the whole screen.

Jagged edges when drawing

Problem
When drawing on the DrawingCanvas, we draw lines in between consecutive positions of the mouse. However, with certain patterns (sharp turns) this results in very jagged edges.

Possible solution
We might be able to solve it by adjusting the style of the stroke. I think playing around with this setting (from the NanoVG API) might help:

// Sets how sharp path corners are drawn.
// Can be one of NVG_MITER (default), NVG_ROUND, NVG_BEVEL.
void nvgLineJoin(NVGcontext* ctx, int join);
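
Through the LWJGL NanoVG bindings that would look roughly like this (assuming ctx is our NanoVG context handle):

import static org.lwjgl.nanovg.NanoVG.*;

// Round off corners (and caps) of the stroked path before drawing it
nvgLineJoin(ctx, NVG_ROUND);
nvgLineCap(ctx, NVG_ROUND);
nvgStroke(ctx);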

Gui component for only rendering text, without background

This can be used to show texts like
"Press E to interact"

(in the tutorial level)
"Use w,a,s,d to move"

It's meant as a non-intrusive way of providing information to the player, i.e. it does not pause the game.

This text can then also be used in the debug menu.

Main menu level

Now that we can have levels in the game, we can start building a 'main menu'. This menu should simply be a nice title screen with the title of the game and a way of starting the first level.

Features of the main menu level

  • It should contain a button/something to start the game.
  • It should contain a nice logo/title which says "Dungeons & Drawings"
  • Possibly a nice background? Perhaps a tiny rendered level?

Roadmap

The first thing to do is to figure out how we are going to design this menu. First we should come up with some concept art/ideas. Then, when we all agree, we can build the actual menu level.

Drawing interface

The drawing interface is an important component of the game. It should support all the puzzles we want to create; therefore, a good discussion is needed to write down all the requirements for this component.

Idea

The drawing interface should be 2D where the user has a clear space to draw an object. This space should not be too large since the user needs to use all the available space (for better network results).

Since the user can find "blueprints" in the game which act as hints, we need some kind of interface to show these hints. But because we haven't fully decided how to implement this yet, we cannot decide how to show this to the user at this point.

The network outputs a map from each class to how likely it thinks the drawn image is that class. This needs to be interpreted in some way and shown back to the user. Based on this outcome, the game should be notified to do "something", like an action (e.g. a wall falling down, blocks appearing, monsters spawning, etc.).

Considerations

  • Does the game fully pause?
  • The post actions system should be generic enough to handle all kinds of scenarios.
  • The window has to be designed to fit in the style of our game

TileMap and Tile class

What is the problem

There needs to be a system of tiles and a map. This system needs to be versatile enough to support all the operations we need for this game. This issue will describe these requirements and (hopefully) try to document all specifications.

Concept

Map
The map object provides an easy interface for querying the map for tiles. The player might want to know whether the tile in front of him is blocked or not, in order to see if he can walk there:

Player.update() {
    // Only perform the movement if the target tile is not blocked
    if (!map.getTile(position.translate(movement)).isBlocked()) {
        // do movement
    }
}

The map will supply the render class with all the tiles and a method to convert the x, y coordinates in the map array to actual world coordinates.

Actually, this is very simple since our tiles will have a width of 1 unit, so their positions in the array are the actual world coordinates.

The map will have a load method to load the map. MapLoader is a simple interface so that we can easily create different implementations for this.

Tile
A simple tile consists of

  • Mesh
  • Rotation vector (usually (0, 0, 0))
  • Scale (usually 1)

This defines the most basic tile there can be. This tile can be used to create "fancier" tiles that contain more operations.

Background of text is not transparent

The text component can render text to the screen, but the background of the text image is not transparent. I think there should be an option to set the background colour.

"Key"bindings for mouse operations

We should change some of the interaction with the mouse to also make use of the KeyBinding system. In the 2D GUI especially, this is not yet the case.

Proposal: implement GameWindow as singleton

Right now, the GameWindow object is being passed around through almost every class. I think that this introduces a lot of dependencies between interfaces/methods which can (and I think should) be avoided.

Proposal

The GameWindow object is already a singleton, but it is not used as such. I propose to implement it this way and remove (almost) all GameWindow parameters.
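
A sketch of how the access pattern would look (the constructor contents are whatever the class already does; the getter at the end is assumed to exist):

public class GameWindow {
    private static GameWindow instance;

    private GameWindow() {
        // existing window setup
    }

    public static GameWindow getInstance() {
        if (instance == null) {
            instance = new GameWindow();
        }
        return instance;
    }
}

// Anywhere else, instead of receiving the window as a parameter:
// int width = GameWindow.getInstance().getWidth();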

Input handler class which defines all key bindings

Current situation

Right now we simply hardcode all the keys which are used in the game in the update methods. This will become quite messy if we have more keys bound to actions.

Proposal

Have one KeyBinding / Input class which provides an easy interface for key bindings. It would be the translation between keys and actions. For example, in an update method of a level:

KeyBinding binding = new DefaultKeyBinding();

if (binding.isForwardPressed()) { // update player position }
if (binding.isToggleDebugMenuPressed()) { // toggle debug menu }

This way, we only have to define our key bindings once. If we want some special key bindings in certain cases, we can simply override the DefaultKeyBinding with SomeSpecialKeyBindings.

The KeyBinding class is then responsible for reading out the LWJGL events for input.

Thoughts?

Player walking animation

We need to implement animation in our game. We want to do this for the player (and possibly monster) walking. Therefore, we should have a system that can do this.

Possible ideas

  • Use a Blender skeleton (?)
  • Export different parts and combine them in the player class, which animates the parts separately.

Requirements

  • Have a player model
  • The walking somewhat corresponds to the actual movement
  • When the player stops, the legs go back to the beginning position.

Button activated 'on pressed' or 'on released'

Should a GUI Button activate when the mouse button is pressed or when it is released? This probably depends on what sort of buttons we want to use. If a Button always removes itself when it activates (and thus will not activate again), our current situation is fine. If this is not the case, we should adjust the code to only activate the button on releasing the mouse button.

Create orientation axis

Proposal

I think it might be a good idea to have some kind of visual indication of what the positive and negative x- and z-axes are in the game. When I move the camera around, I can become disoriented quickly. I think some kind of helper tool for debugging purposes could help us.

Idea

The idea would be to create a symbol that is always aligned with the axis, just like this:
[reference image of such an axis indicator]

This can then either be placed at the mouse position, at the center of the screen in front of all objects, or perhaps in the lower-left corner, or even something like the Minecraft F3 menu.

Create pathfinding algorithm(s)

Mobs need pathfinding to find their path to the player. We should therefore create such algorithms, like A*.

Each pathfinding class can implement a Pathfinding interface.
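
A sketch of what that interface could look like (the parameter types borrow the Tile and Map classes discussed in other issues and are assumptions):

import java.util.List;

public interface Pathfinding {
    // Returns the tiles on the path from start to target, or an empty list if none exists
    List<Tile> findPath(Map map, Tile start, Tile target);
}

// public class AStarPathfinding implements Pathfinding { ... }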

Convert the drawing data in OpenGL to input for the network

Problem
The next step in solving #9 is to convert the data in OpenGL to a format that can be used by the neural network. We store the drawing data as a list of lists of points: List<List<Float>>. When, during drawing, the mouse is released and pressed again, a new sublist is created to store all the points of the next part of the drawing. For each point, two floats are added to the list: one for the x-coordinate of the mouse and one for the y-coordinate. We need to convert this data, or the rendered screen, to a format that can be used by the NN.

Possible solutions

  1. We can use OpenGL as some sort of render engine to export the frame as an image. This image we can then convert to suit our needs. There is a little discussion I found about it here: https://stackoverflow.com/questions/12157646/how-to-render-offscreen-on-opengl
  2. Use the list of lists of floats in a smart way to redraw the drawing in a Swing window, which provides methods to easily export it to a BufferedImage (see the sketch below).
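
A sketch of option 2 (the image size, the stroke width, and any scaling of the stored mouse coordinates are assumptions):

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

public static BufferedImage toImage(List<List<Float>> strokes, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g = image.createGraphics();
    g.setColor(Color.WHITE);
    g.fillRect(0, 0, width, height);
    g.setColor(Color.BLACK);
    g.setStroke(new BasicStroke(2f, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND));

    // Each sublist is one continuous stroke: connect consecutive (x, y) pairs
    for (List<Float> stroke : strokes) {
        for (int i = 0; i + 3 < stroke.size(); i += 2) {
            g.drawLine(Math.round(stroke.get(i)), Math.round(stroke.get(i + 1)),
                       Math.round(stroke.get(i + 2)), Math.round(stroke.get(i + 3)));
        }
    }
    g.dispose();
    return image;
}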

Create level structure

There should be some mechanism to define/load levels. The best approach would be a layer between the GameEngine and IGameLogic that handles the levels.

Implement the trained neural network

We need some kind of class that handles the trained neural network. Some tasks that need to be implemented (a rough sketch follows after the list):

  • Load the model from the zip file. Note that this takes a couple of seconds, therefore it should be done at the game startup.
  • Create an evaluate() method which takes an image and returns a list of strings with the probabilities.
  • Create methods to transform the input image, coming from our game, into an NDArray so it can be used in the network.
  • Optionally, create filter functions to reduce the output list from the evaluate() method to only the probabilities of the requested items. This is useful when checking a puzzle.
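
A rough sketch of such a class, assuming we use DeepLearning4j (the mention of NDArray suggests ND4J); toNDArray(...) is a hypothetical helper that would perform the image transformation described above:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.api.ndarray.INDArray;

public class DrawingClassifier {
    private final MultiLayerNetwork network;

    // Load once at game startup, since restoring the zip takes a few seconds
    public DrawingClassifier(File modelZip) throws IOException {
        this.network = ModelSerializer.restoreMultiLayerNetwork(modelZip);
    }

    // Returns the class probabilities for a drawn image
    public INDArray evaluate(BufferedImage drawing) {
        INDArray input = toNDArray(drawing); // hypothetical conversion helper
        return network.output(input);
    }
}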
