
WebXR Device API Specification


The WebXR Device API provides access to virtual reality (VR) and augmented reality (AR) devices, including sensors and head-mounted displays, on the Web.

        Headset devices                            Handheld devices (e.g. phone)
  VR    VR devices, previously handled by WebVR    Magic Window behaviour
  AR    Mixed reality headsets                     Phone AR

This repository, maintained by the Immersive Web Working Group, is the home of the WebXR Device API Specification.

Taking Part

  1. Read the code of conduct
  2. See if your issue is being discussed in the issues, or if your idea is being discussed in the proposals repo.
  3. We will be publishing the minutes from the bi-weekly calls.
  4. You can also join the working group to participate in these discussions.

Specifications

See also the list of all specifications, with detailed status, in the Working Group and Community Group.

Related Specifications

Legacy

  • Legacy WebVR API Specification: Legacy WebVR API 1.1 specification for the JavaScript API for accessing VR displays. Development of the WebVR API has halted in favor of its replacement, the WebXR Device API. Several browsers will continue to support this version of the API in the meantime.
  • Legacy Gamepad Extensions API Specification: Extends the Gamepad API to enable access to more advanced device capabilities.

Relevant Links

Communication

Maintainers

To generate the spec document (index.html) from the index.bs Bikeshed document:

make
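As a sketch of what that build step involves (an assumption, since the Makefile contents are not shown here: the Makefile presumably invokes the Bikeshed processor, which can be installed via pip):

```shell
# One-time setup: install Bikeshed and fetch its spec databases.
pip3 install bikeshed
bikeshed update

# Build index.html from the index.bs Bikeshed source.
bikeshed spec index.bs index.html
```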

Tests

For normative changes, a corresponding web-platform-tests PR is highly appreciated. Typically, both PRs will be merged at the same time. Note that a test change that contradicts the spec should not be merged before the corresponding spec change. If testing is not practical, please explain why and if appropriate file a web-platform-tests issue to follow up later. Add the type:untestable or type:missing-coverage label as appropriate.

License

Per the LICENSE.md file:

All documents in this Repository are licensed by contributors under the W3C Software and Document License.


Contributors

adarosecannon, alcooper91, anssiko, autokagami, bialpio, cabanier, cslroot, cvan, d504, ddorwin, dlibby-, dmarcos, dontcallmedom, frastlin, himorin, johnpallett, jungkees, kearwood, klausw, manishearth, mcmoodoo, nellwaliczek, svillar, takahirox, thetuvix, toji, tomrittervg, vvuk, wayneca, yonet


Issues

State and events for interstitial/warning screens

Issue by brianchirls
Thursday May 19, 2016 at 02:35 GMT
Originally opened as MozillaReality/webvr-spec#34


Over at webvr-polyfill, there's been a request for events that would fire when the Cardboard interstitial appears and disappears. I think I read @toji say somewhere that the interstitial would be built into Chrome on Android. It occurs to me that this is analogous to the Oculus safety screen and that it would be useful to know when these screens are blocking the display so the user doesn't miss any content.

Does anybody know if the Oculus SDK allows detection of this sort of thing?

It seems that there should be a boolean property on the VRDisplay that tells whether the screen is blocked by a warning and an event that fires when this property changes. Maybe something analogous to document.hidden. Alternatively, onvrdisplaypresentchange could wait to fire until the interstitial is cleared, but that might not make sense if different platforms vary enough in their behavior. For example, if Chrome/Android/Cardboard (or the polyfill) shows an interstitial when you tilt the device too close to vertical, then the latter approach wouldn't work.

I could imagine platforms in the future popping up warnings for various safety and comfort reasons, like: an epilepsy blocker if it detects bright flashing light, or an interruption if the frame rate drops too low.
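The boolean-property-plus-event idea could be sketched as follows. This is a minimal simulation of the proposed semantics, not browser code; the names `isBlocked` and `vrdisplayblockedchange` are hypothetical and appear in no spec.

```javascript
// Minimal sketch of the proposal: a boolean on the display plus an
// event that fires only on transitions, mirroring how document.hidden
// and visibilitychange behave. All names here are hypothetical.
class FakeVRDisplay {
  constructor() {
    this.isBlocked = false; // true while an interstitial/warning covers the display
    this._listeners = [];
  }
  addEventListener(type, fn) {
    if (type === 'vrdisplayblockedchange') this._listeners.push(fn);
  }
  _setBlocked(blocked) {
    if (blocked === this.isBlocked) return; // fire only on transitions
    this.isBlocked = blocked;
    this._listeners.forEach(fn => fn({ display: this }));
  }
}

const display = new FakeVRDisplay();
const log = [];
display.addEventListener('vrdisplayblockedchange',
  e => log.push(e.display.isBlocked));

display._setBlocked(true);  // interstitial appears -> pause content
display._setBlocked(true);  // no transition, no event
display._setBlocked(false); // interstitial cleared -> resume
// log is now [true, false]
```

A page would pause rendering and audio when the event reports `isBlocked === true`, and resume on the next transition.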

VRSource/VRLayer hinting for devices that want to take over the back buffer?

First, take a look at https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx and the section on processing camera updates with respect to the back buffer: "Back buffers can change from frame to frame. Your app needs to validate the back buffer for each camera, and release and recreate resource views and depth buffers as needed."

I think the current specification allows for an indirect ability to optimize rendering through to the device, but the intricacies of various devices mean that each will have to jump through a different set of hoops to make it happen in a compatible and interoperable way.

We should discuss mechanisms for allowing devices to create optimized surfaces that don't require intermediate copies and perhaps further optimizations such as disabling or denying any sort of texture read-back. For this I think we want to continue using the "canvas" as the currency and then allowing a developer to get a rendering context back from said canvas.

enum CanvasThreading {
    "default",
    "threaded" // "offscreen"?
};

partial interface VRDevice {
    // Option #1 - device creation
    // More flexible since it allows binding to go through device-specific paths.
    // Enables creation of devices and surfaces optimized for cross-process rendering, etc.
    VRSource? createDeviceLayer(CanvasThreading canvasType = "default");
};

// Option #2 - device replacement of back-end resources
// Challenging depending on the current state of the VRSource, which may already be part
// of a normal rendering pipeline.
dictionary VRLayer {
    boolean allowDeviceOptimizations = false;
};

hasOrientation (& hasPosition) flags

Issue by mkeblx
Wednesday Apr 06, 2016 at 21:39 GMT
Originally opened as MozillaReality/webvr-spec#30


What is the use case of these VRDisplayCapabilities flags on VRDisplay? Especially hasOrientation, as it's hard to imagine an HMD without orientation tracking.

Also note that the interface was specifically renamed from VRDevice to VRDisplay to focus the spec on covering the use case of HMDs (mainly), not generic tracked VR devices.

Additionally, these capabilities can be inferred from whether the orientation and position data are null or not.
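The null-check alternative mentioned in this issue can be sketched as follows; the helper name `inferCapabilities` is illustrative, not part of any spec.

```javascript
// Sketch: inferring tracking capabilities from null-vs-non-null pose
// fields instead of capability flags. "pose" mimics the shape of a
// WebVR VRPose, where untracked fields are null.
function inferCapabilities(pose) {
  return {
    hasOrientation: pose.orientation !== null,
    hasPosition: pose.position !== null,
  };
}

// A 3DOF device such as Cardboard reports orientation but no position.
const caps = inferCapabilities({
  orientation: new Float32Array([0, 0, 0, 1]),
  position: null,
});
// caps -> { hasOrientation: true, hasPosition: false }
```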

[CLOSED] Proposing we drop VRPose.frameID

Issue by toji
Wednesday Mar 30, 2016 at 04:00 GMT
Originally opened as MozillaReality/webvr-spec#26


I brought up some questions about VRPose.frameID earlier in the spec development, but I don't feel they were addressed due to time constraints. Today I received an email from developers at Samsung that effectively brought up the same points. Specifically:

  • It's not clear what this value should be if you fetch multiple poses from getImmediatePose()
  • I'm not sure how developers should use this value; if it's something the browser needs internally, it probably shouldn't be exposed to JavaScript.
  • If a user wants to create a synthetic Pose to pass into submitFrame I don't know what they'd be expected to put in the frameID.
  • The timestamp field is more valuable for determining things like latency.

If this field has proven useful for the MozVR team I'd love to hear more about it, but lacking further information it feels unnecessary and I think it should be dropped.

Test suites in scope?

Currently, we have the following TBD text in the Test Suites and Other Software section of the charter draft inherited from the CG charter template:

Test Suites and Other Software

{TBD: State whether test suites or any other software will be created for any Specifications and list the relevant licenses. For information about contributions to W3C test suites, please see Test the Web Forward, and take note of the W3C's test suite contribution policy and licenses for W3C test suites. If there are no plans to create a test suite or other software, please state that.}

I'm aware of people who indicated interest in help with test suite creation, so unless I hear concerns, I'd suggest we update this section to say:

Test Suites and Other Software

The group MAY produce test suites to support the Specifications. The W3C's test suite contribution policy and licenses for W3C test suites apply.

[CLOSED] eyeTranslation.w must be one

Issue by Codes4Fun
Saturday Sep 26, 2015 at 06:49 GMT
Originally opened as MozillaReality/webvr-spec#8


I've noticed this issue with current implementations of WebVR: they have eyeTranslation.w set to zero. There are two problems with this:

  1. If you transform eyeTranslation with a 4x4 matrix, it will not be translated by the matrix.
  2. If you put eyeTranslation into a 4x4 matrix, the zero w will mean that all points transformed by the matrix have their w set to zero.

w=1 is meant for positional vectors, while w=0 is meant for direction vectors that you don't want to translate like surface normals, velocities, etc.

I would think this means that either eyeTranslation should not have a w component, or w should be required to always be one.
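The difference can be demonstrated with a small example, assuming column-major 4x4 matrices as used by WebGL; the `transform` helper is illustrative only.

```javascript
// Applying a 4x4 translation matrix (column-major) to a vec4.
// With w = 0 the translation column is ignored; with w = 1 it applies.
function transform(m, v) { // m: 16 floats column-major, v: [x, y, z, w]
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Identity rotation plus a translation of (10, 20, 30).
const translate = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  10, 20, 30, 1,
];

const directionLike = transform(translate, [1, 2, 3, 0]); // [1, 2, 3, 0]   -- not translated
const positionLike  = transform(translate, [1, 2, 3, 1]); // [11, 22, 33, 1] -- translated
```

This is exactly the surface-normal-vs-point distinction: an eye translation is a positional offset, so it should behave like the w = 1 case.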

sittingToStandingTransform may be a misnomer?

Issue by toji
Wednesday Mar 02, 2016 at 21:45 GMT
Originally opened as MozillaReality/webvr-spec#18


A bit of feedback I've received is that the concepts of "sitting space" and "standing space" that we've borrowed from OpenVR may not be accurate representations of the values we're providing. For example: It's definitely possible to stand in place while using an Oculus Rift, and you probably want to when using controllers like Touch. Since there's no sense of where you are in relation to the room, this would still be reported in "sitting space", though. Similarly, you may want to sit down with a Vive but still have the scene oriented to your room using "standing space." The actual values reported right now in either case should be fine, but the verbiage is weird.

I propose that we run with the verbiage of "stage" that we've already defined. So sittingToStandingTransform becomes stageTransform. We'd also have to come up with a term for the default space ("relative" comes to mind but may be too vague) and change some verbiage in the spec, but otherwise everything continues to work the way it does now.

Or we could just decide that this is a silly thing to worry about and move on. :)

Replace offset by translation

Suggestion: replace "offset" with "translation" (here), or change the type of the "offset" attribute (Float32Array => float).

  1. The "offset" attribute is a three-element Float32Array, but an "offset" suggests a distance, i.e. a single number.
    This may cause misunderstanding.
  2. "translation" is a better name for the attribute than "eyeTranslation", because it is already a property of the eye; adding an "eye" prefix is unnecessary.

Import as issues mailing list discussions (from web-vr-discuss + public-webapps)

I went through all the archives, plus my inbox, and here are the discussions I thought were relevant to API topics:

[gamepad] Missing VRPose for tracked controllers
https://lists.w3.org/Archives/Public/public-webapps/2016AprJun/0078.html

[webvr] [gamepad] Missing VRPose for tracked controllers
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-May/001108.html

[webvr] Adding a VRPose to the Gamepad API
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-May/001109.html

[gamepad] New feature proposals: pose, touchpad, vibration
https://lists.w3.org/Archives/Public/public-webapps/2016AprJun/0052.html

[web-vr] Render Targets and WebVR
https://mail.mozilla.org/pipermail/web-vr-discuss/2015-March/000608.html

[webvr] Send multiple canvas to HMD
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-April/001080.html

[webvr] Event to indicate hardware has requested VR presentation?
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-April/001096.html

[web-vr] Proposing WebStereo
https://mail.mozilla.org/pipermail/web-vr-discuss/2016-February/000990.html

[web-vr] Proposed Verbiage change: Position => Pose
https://mail.mozilla.org/pipermail/web-vr-discuss/2015-December/000936.html

[web-vr] Spec questions
https://mail.mozilla.org/pipermail/web-vr-discuss/2015-March/000610.html


@anssiko recommends the following:

I would probably just do that manually with credits to the original author. A new issue per mail, with a link to the mozvr mailing list archive. If there's a mail thread with relevant content, I might manually cherry-pick the substantial content from the thread to the issue. Anything goes, as long as we're clearly crediting the original author.

Exposing standard, monoscopic monitors as Display devices

In Primrose, it worked really well for my purposes to polyfill in the 2D monitor as a "VR Display" that had only one eye parameter. I returned those parameters when calling getEyeParameters with "left", and I passed null when calling with "right" (though see note on Magic Window configurations). I collect all non-null eye parameters into an array and then I loop over that array as I render. The requestPresent method polyfilled to calling the standard Full-Screen workflow.

Because of this, I didn't have to create special code paths to handle the technically-only-marginally-different types of displays. There's only one code path for displays. And to support spectator mode--showing a mono rendering on the monitor--I would theoretically only need an array of displays to loop over. Such a setup should also be more future-proof against CAVE setups.
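The single render path described above could be sketched as follows; the display object here is a stand-in, not a real VRDisplay, and the function names are illustrative.

```javascript
// Collect only the non-null eye parameters, so a mono "display"
// (a plain 2D monitor polyfilled as a VR display) and a stereo HMD
// share the same rendering loop.
function getActiveEyes(display) {
  return ['left', 'right']
    .map(eye => display.getEyeParameters(eye))
    .filter(params => params !== null);
}

function render(display, drawView) {
  for (const params of getActiveEyes(display)) drawView(params);
}

// A polyfilled 2D monitor: one eye parameter. A real HMD would
// return two, and the same render() call would draw both views.
const monitor = {
  getEyeParameters: eye =>
    eye === 'left' ? { renderWidth: 1920, renderHeight: 1080 } : null,
};

const drawn = [];
render(monitor, params => drawn.push(params.renderWidth));
// drawn -> [1920]
```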

As opposed to how webvr-polyfill works, I also didn't implement touch-panning or mouse/keyboard movement in that display for the same reason: it would create two code paths for handling input. I already support mouse/keyboard for desktop users on tethered HMDs, and touch-panning is also useful for non-stereo use-cases with tablets in Magic Window configurations.

On Magic Window configurations: technically, a smartphone is capable of both Magic Window and Google Cardboard, but it's not multiple displays, it's just multiple rendering configurations for one, common display. So instead of returning a mono-display configuration through one-of "left" or "right" from getEyeParameters, it probably makes sense to have a completely separate "eye parameter" for "center" or "mono".

I'm using webvr-polyfill now because I got tired of trying to keep device fusion working in my configuration, so I don't have that feature in my current DEV branch anymore, but you can see it in the master branch (and thus, on primrosevr.com) right now as I don't have the new deployment ready yet. I will probably end up polyfilling the concept back in, either by trying to get webvr-polyfill to adopt the idea, forking webvr-polyfill, or pre-wrapping navigator.getVRDisplays before webvr-polyfill can get to it.

Clarify what happens when `VRLayer` is missing a `source`

Right now the spec does not state how to handle a VRLayer passed into requestPresent without a source property or what to do if that source is not valid. According to @toji, Chrome resolves the promise and does nothing.

I think it makes sense to reject the promise, which would be more helpful for debugging than failing silently. Either way, the behavior should be documented.
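The rejection behavior proposed here could look like the following sketch; this models the proposal, not the real browser implementation of requestPresent.

```javascript
// Sketch of the proposed behavior: reject when a layer has no valid
// source, so the failure is visible to the developer instead of the
// display silently presenting nothing.
function requestPresent(layers) {
  return new Promise((resolve, reject) => {
    for (const layer of layers) {
      if (!layer.source) {
        reject(new TypeError('VRLayer has no source'));
        return;
      }
    }
    resolve();
  });
}

requestPresent([{}])
  .catch(err => console.log(err.message)); // "VRLayer has no source"
```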

[CLOSED] rename VRLayer

Issue by andreasplesch
Saturday Mar 19, 2016 at 05:32 GMT
Originally opened as MozillaReality/webvr-spec#24


Layering requires multiple layers; however, currently only a single VRLayer can be presented by the display.
Therefore, as a suggestion, VRSource would be a better fit. There are probably better names to be discovered as well.

Updates to requestPresent to allow two VRLayers for devices that use a separate texture per camera

Please refer to https://developer.microsoft.com/en-us/windows/holographic/rendering_in_directx and specifically the section on "Render to each camera".

The current VRLayer setup is optimized for rendering to a combined left/right eye surface. This means that a developer can too easily take a dependency on being able to render to both eyes at the same time simply by crossing over the texture center line. While this may not produce amazing results, it could be a dependency.

I'd like to extend requestPresent to take two VRLayers, with the restriction that the VRSource objects for those layers are not the same, and perhaps even that the bounds cover the entire contents of each layer.

In combination with #51 this might mean adding a createDeviceLayers equivalent.

Note: These aren't final proposals; they are meant to spur conversation.

[CLOSED] ambiguity on number of VRLayers

Issue by andreasplesch
Saturday Mar 19, 2016 at 05:25 GMT
Originally opened as MozillaReality/webvr-spec#23


While the plural in getLayers, as well as the word "layer" itself, indicates that more than one layer can be presented, requestPresent and the wording of the VRLayer explanation indicate that only a single layer can be presented.

This ambiguity can lead to confusion and should be removed.

Although implementations currently only allow a single layer, the spec may be more demanding and allow multiple layers.

Dependencies or Liaisons?

The last remaining todo in the Charter draft is the Dependencies or Liaisons section. Do folks foresee any significant dependencies on other groups (inside or outside W3C) we should mention in this section of the Charter? For examples from other Community Groups, see e.g. the Web NFC Community Group Charter and Web Bluetooth Community Group Charter.

I suggest we add at least the W3C Device and Sensors Working Group that is defining the Generic Sensor API, a framework for exposing sensor data to the Web in a consistent way, to be used by concrete sensors (e.g. ambient light, proximity, accelerometer, gyroscope, magnetometer). This could be used to improve the webvr-polyfill, and/or enable better code reuse in browsers (Chromium implementation is in progress, starting with ALS, but moving to other concrete sensors soon). There are also reuse opportunities on the spec-level.

I can submit a PR with proposed changes after getting everyone's feedback.

Linking between VR web pages

Issue by brianchirls
Thursday Mar 17, 2016 at 15:09 GMT
Originally opened as MozillaReality/webvr-spec#22


I seem to recall some discussion of enabling linking directly from one VR web site to another without having to stop presenting. The spec as it is doesn't seem to handle that.

This is a potentially tricky case, so I understand wanting to hold off on implementation until more progress is made in the field. But has there been any discussion about what this might look like in an API? Are we confident that the API as it currently stands leaves room for that in the future?

[CLOSED] Multiple PositionSensorVRDevices at once

Issue by borismus
Monday Mar 23, 2015 at 18:53 GMT
Originally opened as MozillaReality/webvr-spec#2


Multiple position sensors controlling the observer seems to be a common situation.

Most commonly, you may be using the Rift's gyroscope for rotation, but also want to support mouse look and maybe arrow keys to look around.

Also, I'd like to build a 6DOF system for desktop where we use headtrackr for the positional DOFs, but also be able to mouse look. This case is easier since one of the devices only provides rotation, and the other provides only position.

It looks like the spec intends VRPositionState to report absolute coordinates, which makes it hard to combine multiple states (e.g. the Rift case). Maybe switching to relative coordinates would help? Otherwise, how do we handle the multiple-position-sensor case?
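The relative-coordinates idea can be sketched as follows. All names are illustrative; yaw is kept as a single scalar for simplicity, where a real implementation would compose quaternions.

```javascript
// Sketch: combining multiple input sources when each reports
// *relative* deltas rather than absolute poses. Each source only
// contributes the degrees of freedom it actually tracks.
function combinePose(base, deltas) {
  return deltas.reduce((pose, d) => ({
    yaw: pose.yaw + (d.yaw || 0),
    position: pose.position.map((p, i) => p + (d.position ? d.position[i] : 0)),
  }), base);
}

const start = { yaw: 0, position: [0, 0, 0] };
const hmdDelta   = { yaw: 0.25 };              // e.g. Rift gyroscope
const mouseDelta = { yaw: 0.5 };               // mouse-look
const headDelta  = { position: [0, 0.02, 0] }; // e.g. headtrackr position

const combined = combinePose(start, [hmdDelta, mouseDelta, headDelta]);
// combined -> { yaw: 0.75, position: [0, 0.02, 0] }
```

With absolute poses the two rotation sources would fight each other; with deltas they simply accumulate.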

Standing Space without Room Scale

Issue by toji
Monday Mar 21, 2016 at 19:39 GMT
Originally opened as MozillaReality/webvr-spec#25


There's a rumor that the Oculus 1.0 SDK will include the ability to get coordinates relative to the floor. That's great news if true, but I don't think it will include chaperone-style room bounds. Even if not true this sort of capability is something we should plan for.

It's not clear how this sort of capability should map to our StageParameters. I suppose the easy way out is to simply set sizeX and sizeZ to 0, but there's also a difference in how it would interact with resetPose(). With a Vive the transform will always update the pose to be oriented to the room, but with a Rift resetPose() will still re-orient you.

I'd appreciate any thoughts on the matter!

Define behavior when tab loses focus while presenting

Issue by brianchirls
Thursday Mar 17, 2016 at 15:05 GMT
Originally opened as MozillaReality/webvr-spec#20


What should happen when you switch to another tab or application while presenting in VR to an external display?

The spec says this:

In the event that the content process terminates unexpectedly, the browser will not exit VR mode. The VR compositor will destroy the content layer while continuing to present the trusted UI elements of the browser.

...which seems to indicate a desire not to abuse the user with a jarring experience. I think it's the right call.

It also says this:

The HMD pose and other VR inputs are only updated for the focused WebVR page. This can be implemented in the same manner as keyboard and mouse input.

The former seems to imply that there will be some pose-tracked 3D browser UI elements, so maybe the solution is to display that in place of the formerly active tab. So would you just resume presenting the web page as soon as it regains focus? What if you're switching between two tabs that are both VR enabled?

Any ideas?

Document `allowvr` attribute for iframes

Issue by cvan
Tuesday Mar 01, 2016 at 21:38 GMT
Originally opened as MozillaReality/webvr-spec#17


We want to add a new attribute allowvr that can be set on <iframe> (à la allowfullscreen).

This attribute can be set to true if the frame is allowed to access VRDisplay objects. When false, navigator.getVRDisplays() and navigator.activeVRDisplays will resolve and return empty sequences.

[CLOSED] The right way to enter VR mode

Issue by borismus
Thursday Mar 26, 2015 at 21:57 GMT
Originally opened as MozillaReality/webvr-spec#3


The current approach of going into VR mode via requestFullScreen({vrDisplay: hmd}) seems a little bit limiting. Why should full-screen be associated with VR? Does it still make sense for direct-to-rift mode, where only the rift display should be affected? How about for Cardboard-style uses?

What if we had something like hmd.startVR() instead?

/cc: @dmarcos
