
webvr-spec's People

Contributors

cslroot, cvan, dmarcos, mkeblx, sneagan, toji, vvuk


webvr-spec's Issues

Linking between VR web pages

I seem to recall some discussion of enabling linking directly from one VR web site to another without having to stop presenting. The spec as it is doesn't seem to handle that.

This is a potentially tricky case, so I understand wanting to hold off on implementation until more progress is made in the field. But has there been any discussion about what this might look like in an API? Are we confident that the API as it currently stands leaves room for that in the future?

Multiple PositionSensorVRDevices at once

Having multiple position sensors control the observer seems likely to be a common situation.

Most commonly, you may be using the Rift's gyroscope for rotation, but also want to support mouse look and maybe arrow keys to look around.

Also, I'd like to build a 6DOF system for desktop where we use headtrackr for the positional DOFs, but also be able to mouse look. This case is easier since one of the devices only provides rotation, and the other provides only position.

It looks like the spec intends VRPositionState to report absolute coordinates, which makes it hard to combine multiple states (e.g. the Rift case). Maybe switching to relative coordinates would help? Otherwise, how do we handle the multiple-position-sensor case?
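For the easier, disjoint case, a minimal sketch (assuming the current PositionSensorVRDevice.getState() API, with one device reporting only orientation and the other only position) might look like:

    // A sketch of combining two non-overlapping sensors. rotationSensor and
    // positionSensor are assumed to be PositionSensorVRDevice instances where
    // one reports only orientation (e.g. the Rift's gyro) and the other only
    // position (e.g. a webcam head tracker).
    function combinedState(rotationSensor, positionSensor) {
      var rotState = rotationSensor.getState();
      var posState = positionSensor.getState();
      return {
        orientation: rotState.orientation, // quaternion from the rotation-only device
        position: posState.position        // vector from the position-only device
      };
    }

The Rift-plus-mouse-look case is the hard one: both inputs affect orientation in absolute terms, so they can't be merged this way.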

ambiguity on number of VRLayers

While the plural in getLayers, as well as the word "layer" itself, indicates that more than one layer can be presented, requestPresent and the prose describing VRLayer indicate that only a single layer can be presented.

This ambiguity can lead to confusion and should be removed.

Although implementations currently allow only a single layer, the spec could be more forward-looking and allow multiple layers.
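For reference, requestPresent already accepts a sequence of layers, even though implementations today only accept a single entry:

    // requestPresent takes a sequence of VRLayers; current implementations
    // reject arrays with more than one entry.
    vrDisplay.requestPresent([{ source: webglCanvas }]).then(function () {
      // Presenting: webglCanvas now drives the headset.
    });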

hasOrientation (& hasPosition) flags

What is the use case for these VRDisplayCapabilities flags on VRDisplay? hasOrientation especially, as it is hard to imagine an HMD without orientation tracking.

Also note that the interface was specifically renamed from VRDevice to VRDisplay to focus the spec on covering HMDs (mainly) rather than generic tracked VR devices.

Additionally, the same information can be inferred from whether the orientation and position fields of the pose are null.
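A sketch of the two approaches, assuming the WebVR 1.0 shapes of VRDisplayCapabilities and getPose():

    // Option A: capability flags, known up front.
    if (vrDisplay.capabilities.hasPosition) {
      // Expect pose.position to be populated.
    }

    // Option B: infer the same thing per-frame from the pose itself.
    var pose = vrDisplay.getPose();
    if (pose.position === null) {
      // Orientation-only device.
    }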

Standing Space without Room Scale

There's a rumor that the Oculus 1.0 SDK will include the ability to get coordinates relative to the floor. That's great news if true, but I don't think it will include chaperone-style room bounds. Even if it's not true, this sort of capability is something we should plan for.

It's not clear how this sort of capability should map to our StageParameters. I suppose the easy way out is to simply set sizeX and sizeZ to 0, but there's also a difference in how it would interact with resetPose(): with a Vive the stage transform always keeps the pose oriented to the room, but with a Rift resetPose() will still re-orient you.

I'd appreciate any thoughts on the matter!
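A sketch of what the zero-size convention would mean for content, assuming sizeX/sizeZ are zeroed when no bounds are known:

    var stage = vrDisplay.stageParameters;
    if (stage) {
      // sittingToStandingTransform still orients content to the floor.
      if (stage.sizeX === 0 && stage.sizeZ === 0) {
        // Floor height is known but there are no chaperone-style bounds,
        // so skip drawing a play-area boundary.
      }
    }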

State and events for interstitial/warning screens

Over at webvr-polyfill, there's been a request for events that would fire when the Cardboard interstitial appears and disappears. I think I read @toji say somewhere that the interstitial would be built into Chrome on Android. It occurs to me that this is analogous to the Oculus safety screen and that it would be useful to know when these screens are blocking the display so the user doesn't miss any content.

Does anybody know if the Oculus SDK allows detection of this sort of thing?

It seems that there should be a boolean property on the VRDisplay that tells whether the screen is blocked by a warning and an event that fires when this property changes. Maybe something analogous to document.hidden. Alternatively, onvrdisplaypresentchange could wait to fire until the interstitial is cleared, but that might not make sense if different platforms vary enough in their behavior. For example, if Chrome/Android/Cardboard (or the polyfill) shows an interstitial when you tilt the device too close to vertical, then the latter approach wouldn't work.

I could imagine platforms in the future popping up warnings for various safety and comfort reasons, like: an epilepsy blocker if it detects bright flashing light, or an interruption if the frame rate drops too low.
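To make the proposal concrete, here is a hypothetical shape (isBlocked, the vrdisplayblockedchange event, and the pause/resume helpers are all invented names for illustration, not anything in the spec):

    // Hypothetical API, analogous to document.hidden + visibilitychange.
    window.addEventListener('vrdisplayblockedchange', function (e) {
      if (e.display.isBlocked) {
        pauseExperience();  // an interstitial/safety screen is covering the view
      } else {
        resumeExperience(); // the display is visible again
      }
    });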

rename VRLayer

Layering requires multiple layers, but currently only a single VRLayer can be presented by the display.
VRSource is therefore suggested as a better fit; there are probably better names to be discovered as well.

eyeTranslation.w must be one

I've noticed this issue with current implementations of WebVR: they have eyeTranslation.w set to zero. There are two problems with this:

  1. If you transform eyeTranslation by a 4x4 matrix, it will not be translated by the matrix.
  2. If you build eyeTranslation into a 4x4 matrix, the w of zero means all points transformed by that matrix will have their w set to zero.

w=1 is meant for positional vectors, while w=0 is meant for direction vectors that you don't want to translate like surface normals, velocities, etc.

I would think this means that eyeTranslation should either not have a w component at all, or should require w to always be one.
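To spell out point 1, here's a sketch of a column-major 4x4 matrix-times-vec4 multiply, which shows the translation terms dropping out when w = 0:

    // m is a column-major 4x4 matrix (like the spec's Float32Array fields),
    // v is [x, y, z, w].
    function transformVec4(m, v) {
      return [
        m[0] * v[0] + m[4] * v[1] + m[8]  * v[2] + m[12] * v[3],
        m[1] * v[0] + m[5] * v[1] + m[9]  * v[2] + m[13] * v[3],
        m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14] * v[3],
        m[3] * v[0] + m[7] * v[1] + m[11] * v[2] + m[15] * v[3]
      ];
    }
    // With v[3] = 0, the translation column m[12..14] contributes nothing,
    // so eyeTranslation passes through untranslated.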

The right way to enter VR mode

The current approach of going into VR mode via requestFullScreen({vrDisplay: hmd}) seems a little bit limiting. Why should full-screen be associated with VR? Does it still make sense for direct-to-rift mode, where only the rift display should be affected? How about for Cardboard-style uses?

What if we had something like hmd.startVR() instead?
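As a sketch of the difference (startVR being the hypothetical method suggested above):

    // Today: VR presentation is coupled to fullscreen.
    canvas.requestFullScreen({ vrDisplay: hmd });

    // Proposed: a dedicated entry point, independent of fullscreen.
    hmd.startVR();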

/cc: @dmarcos

Define behavior when tab loses focus while presenting

What should happen when you switch to another tab or application while presenting in VR to an external display?

The spec says this:

In the event that the content process terminates unexpectedly, the browser will not exit VR mode. The VR compositor will destroy the content layer while continuing to present the trusted UI elements of the browser.

...which seems to indicate a desire not to abuse the user with a jarring experience. I think it's the right call.

It also says this:

The HMD pose and other VR inputs are only updated for the focused WebVR page. This can be implemented in the same manner as keyboard and mouse input.

The former seems to imply that there will be some pose-tracked 3D browser UI elements, so maybe the solution is to display that in place of the formerly active tab. So would you just resume presenting the web page as soon as it regains focus? What if you're switching between two tabs that are both VR enabled?
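For what it's worth, a page can already observe focus changes and pause its own simulation; the open question is what the compositor shows in the meantime. A minimal sketch:

    var paused = false;
    window.addEventListener('blur',  function () { paused = true;  });
    window.addEventListener('focus', function () { paused = false; });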

Any ideas?

Need a way to determine whether rendering is falling below target frame rate

I've been thinking about the possibility of adapting render quality to achieve the optimal frame rate, as described in this article. The article mentions things like adjusting the pre-distortion render buffer scale, adjusting anti-aliasing detail and some more complicated ideas. I was also thinking about adjusting the far end of z-clipping (with some fog) as it's appropriate for content, levels of detail, etc.

In order to do this, we need to know how well we're doing at meeting the target frame rate. The simplest way I can think of doing this is to report the frame rate of the VRDisplay device. Presumably, this would be fixed for now, but I suppose not necessarily forever. I suppose there are some arguments to be made against providing this information, but since there are annoying but feasible ways to figure it out, we may as well provide it easily and correctly.

One way that this falls short, however, is that if you are rendering at the optimal frame rate, there isn't really a good way to know if you've got a little extra time so you can bump the quality up or run some other tasks. You'd have to try raising the quality and then waiting to see if that drops the frame rate, which is certainly not ideal. If anybody has any ideas about that, I'd love to hear them.
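Failing a spec-level answer, one of the annoying-but-feasible workarounds is timing your own frame loop against an assumed target rate. A sketch (the 90 Hz figure and the lowerRenderScale()/render() helpers are assumptions):

    var TARGET_MS = 1000 / 90; // assumed display refresh rate
    var lastT = 0, slowFrames = 0;
    function onFrame(t) {
      if (lastT && (t - lastT) > TARGET_MS * 1.5) slowFrames++;
      if (slowFrames > 10) {   // consistently missing frames:
        lowerRenderScale();    // hypothetical quality knob
        slowFrames = 0;
      }
      lastT = t;
      render();                // hypothetical app render
      vrDisplay.requestAnimationFrame(onFrame);
    }
    vrDisplay.requestAnimationFrame(onFrame);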

cc: @toji, per our chat in webvr slack

sittingToStandingTransform may be a misnomer?

A bit of feedback I've received is that the concepts of "sitting space" and "standing space" that we've borrowed from OpenVR may not be accurate representations of the values we're providing. For example: It's definitely possible to stand in place while using an Oculus Rift, and you probably want to when using controllers like Touch. Since there's no sense of where you are in relation to the room, this would still be reported in "sitting space", though. Similarly, you may want to sit down with a Vive but still have the scene oriented to your room using "standing space." The actual values reported right now in either case should be fine, but the verbiage is weird.
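Whatever it ends up being called, the value itself is just a 4x4 matrix that maps pose coordinates into floor-oriented space. A sketch of typical use (multiplyMatrixAndPoint is a hypothetical helper):

    var stage = vrDisplay.stageParameters;
    var standingPosition = multiplyMatrixAndPoint(
        stage.sittingToStandingTransform, pose.position);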

I propose that we run with the verbiage of "stage" that we've already defined. So sittingToStandingTransform becomes stageTransform. We'd also have to come up with a term for the default space ("relative" comes to mind but may be too vague) and change some verbiage in the spec, but otherwise everything continues to work the way it does now.

Or we could just decide that this is a silly thing to worry about and move on. :)

Proposing we drop VRPose.frameID

I brought up some questions about VRPose.frameID earlier in the spec development, but I don't feel they were addressed due to time constraints. Today I received an email from developers at Samsung that effectively brought up the same points. Specifically:

  • It's not clear what this value should be if you fetch multiple poses from getImmediatePose()
  • I'm not sure how developers should use this value, and if it's something the browser needs internally it probably shouldn't be exposed to JavaScript.
  • If a user wants to create a synthetic Pose to pass into submitFrame I don't know what they'd be expected to put in the frameID.
  • The timestamp field is more valuable for determining things like latency.

If this field has proven useful for the MozVR team I'd love to hear more about it, but lacking further information it feels unnecessary and I think it should be dropped.

Document `allowvr` attribute for iframes

We want to add a new attribute allowvr that can be set on <iframe> (à la allowfullscreen).

This attribute can be set to true if the frame is allowed to access VRDisplay objects. When false, navigator.getVRDisplays() and navigator.activeVRDisplays will resolve and return empty sequences.
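For illustration, usage would presumably mirror allowfullscreen:

    <iframe src="https://example.com/vr-content.html" allowvr></iframe>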
