
openxr-docs's Introduction

OpenXR™ API Documentation Project

This repository contains sources for the formal documentation of the OpenXR API. This includes:

  • the OpenXR API Specification

  • OpenXR header files (generated from the specification)

  • related tools and scripts.

The authoritative public repository is located at https://github.com/KhronosGroup/OpenXR-Docs/. It hosts the public issue tracker and accepts patches (pull requests) from the general public.

Directory Structure

The directory structure is as follows:

README.adoc           This file
COPYING.md            Copyright and licensing information
CODE_OF_CONDUCT.md    Code of Conduct
specification/        Specification - files to generate the spec
include/openxr/       Static OpenXR header (not generated from the Registry)

Building the Specification and Reference Pages, and Regenerating Headers

The document sources are marked up in Asciidoctor format, and we use asciidoctor and related toolchain components to generate output documents. See specification/README.md for more information on installing the toolchain and building the Specification.

Note about git blame

We track "bulk commits" in the .git-blame-ignore-revs file for better git blame output. Unfortunately, web interfaces do not yet appear to honor this file, but you can use it from the command line. See the --ignore-revs-file docs for details, and this blog post about ignore-revs for some useful usage details.


openxr-docs's Issues

OS support

hi 🤗

I am missing some information on the start page about OS support. Is this something that I can use to run Windows VR on Linux?

Since it is a cross-platform abstraction layer, it should mean that I can use every supported headset on each supported OS.

That would be superb. Why are you hiding this feature on your homepage?

Vulkan graphics requirements API version semantics are ambiguous

XrGraphicsRequirementsVulkanKHR's minApiVersionSupported is specified as:

the minimum version of Vulkan that the runtime supports.

However, Vulkan has two different notions of version: the instance version, and the physical device version. It doesn't seem to be specified which of these should be checked against. The device version is usually the most important, but since the XR runtime is responsible for selecting the device it might be expected to perform that check itself.

An application can probably be safe by checking both versions, but explicitly documenting the desired behavior would reduce the odds of people making an incorrect assumption as to which matters, as I've inadvertently done in the past.
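
For illustration, here is a minimal sketch of the "check both versions" workaround described above. The helper name meetsMinimum is made up for this sketch; it assumes the application has already queried XrGraphicsRequirementsVulkanKHR, the Vulkan instance version, and the physical device properties.

#include <vulkan/vulkan.h>
#include <openxr/openxr.h>
#include <stdbool.h>

/* XrVersion and Vulkan API versions pack major/minor/patch differently, so
   compare the major/minor components rather than the raw integers. */
static bool meetsMinimum(uint32_t vkVersion, XrVersion xrMinimum)
{
    uint64_t minMajor = XR_VERSION_MAJOR(xrMinimum);
    uint64_t minMinor = XR_VERSION_MINOR(xrMinimum);
    return VK_VERSION_MAJOR(vkVersion) > minMajor ||
           (VK_VERSION_MAJOR(vkVersion) == minMajor &&
            VK_VERSION_MINOR(vkVersion) >= minMinor);
}

/* Intended use: check the instance version (from vkEnumerateInstanceVersion)
   and the device version (VkPhysicalDeviceProperties::apiVersion) against
   XrGraphicsRequirementsVulkanKHR::minApiVersionSupported, e.g.
     meetsMinimum(instanceVersion, req.minApiVersionSupported) &&
     meetsMinimum(deviceProps.apiVersion, req.minApiVersionSupported) */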

input/output structure usage is not consistently specified

The documentation for xrGetVisibilityMaskKHR says:

visibilityMask is an input/output struct which specifies the view mask.

However the documentation for xrGetControllerModelPropertiesMSFT says:

properties is a XrControllerModelPropertiesMSFT returns the properties of the controller model

It would be nice if the language of the former could be used for every input/output struct.

Additionally, it would be very helpful to have input/output parameters clearly marked in the XML specification. I don't think this is done for Vulkan because its structs are purely input or output.

Section 10.1: xrEnumerateSwapchainImages images parameter inconsistency

10.1. Swapchain Image Management
Parameter Descriptions

  • images is a pointer to an array of graphics API-specific XrSwapchainImage structures based off of XrSwapchainImageBaseHeader. It can be NULL if imageCapacityInput is 0.

vs

Valid Usage (Implicit)

  • If imageCapacityInput is not 0, images must be a pointer to an array of imageCapacityInput XrSwapchainImageBaseHeader structures

images should never be a pointer to an array of XrSwapchainImageBaseHeader structures; it must be an array of (larger) graphics API-specific structures that each start with a base header.

Should this state that they all must be of the same type?
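
For reference, a sketch of the usage pattern the paragraph above describes, using a Vulkan-based swapchain (assumes openxr_platform.h with XR_USE_GRAPHICS_API_VULKAN, and that swapchain is a valid handle):

/* First call: query the image count only. */
uint32_t imageCount = 0;
xrEnumerateSwapchainImages(swapchain, 0, &imageCount, NULL);

/* Allocate the larger, graphics-API-specific structures... */
XrSwapchainImageVulkanKHR* images =
    calloc(imageCount, sizeof(XrSwapchainImageVulkanKHR));
for (uint32_t i = 0; i < imageCount; i++) {
    images[i].type = XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR;
}

/* ...but pass them through the base-header pointer type in the prototype. */
xrEnumerateSwapchainImages(swapchain, imageCount, &imageCount,
                           (XrSwapchainImageBaseHeader*)images);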

enum array output parameter validation of xrEnumerate functions is awkward

The xrEnumerateViewConfigurations valid usage section states:

Valid Usage (Implicit)

  • If viewConfigurationTypeCapacityInput is not 0, viewConfigurationTypes must be a pointer to an array of viewConfigurationTypeCapacityInput XrViewConfigurationType values

Consequently, the core validation layer checks that all array members have a valid XrViewConfigurationType enum value before the function is called, even though viewConfigurationTypes is purely an output parameter. Since there is no enum value for an undefined / uninitialized view configuration type, the user has to awkwardly pick a valid value and initialize the array with that:

XrViewConfigurationType viewConfigurations[viewConfigurationCount];
for (uint32_t i = 0; i < viewConfigurationCount; i++) {
    viewConfigurations[i] = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
}
result = xrEnumerateViewConfigurations(instance, systemId,
    viewConfigurationCount, &viewConfigurationCount, viewConfigurations);

This enforced initialization loop is confusing at best, especially if the runtime doesn't even support XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO. It would be better to allow passing an uninitialized or at least zero-initialized viewConfigurations array.

xrEnumerateReferenceSpaces has the same issue:

Valid Usage (Implicit)

  • If spaceCapacityInput is not 0, spaces must be a pointer to an array of spaceCapacityInput XrReferenceSpaceType values

XrPath behavior incorrectly described as "undefined"

The definition of XrPath states:

An XrPath that is received from one XrInstance may not be used with another. Such an invalid use may be detected and result in an error being returned, or it may result in undefined behavior.

I understand that Khronos' intention is that the behavior is, in fact, defined to be either an error or as an unspecified valid path had been used. However, the term "undefined behavior" is widely recognized, and indeed used elsewhere in the spec, to mean completely arbitrary consequences, such as dereferencing an invalid pointer. The language should be corrected to something like "behavior as if an unspecified valid path was used," thereby providing a closed set of possible outcomes. "Unspecified behavior" is another possibility, but the meaning is less clear and it may be useful to guarantee that e.g. aborting the process isn't permitted, if that's indeed the intention.

Awkward synchronization requirements imposed on Vulkan-capable implementations

xrAcquireSwapchainImage is the only function that the application must call before submitting commands that access a swapchain image and that is also permitted by XR_KHR_vulkan_enable to access the application's graphics queue. Sharing swapchain images with a compositor requires that the OpenXR implementation submit acquire/release barriers enclosing the application's work on the application's queue, corresponding to release/acquire barriers on the compositor's queue. Hence, xrAcquireSwapchainImage must submit an acquire barrier to a Vulkan application's queue.

Such a barrier must have an execution dependency on the compositor's corresponding "release" barrier. Because a traditional Vulkan semaphore cannot be waited on until a submission has been made that will signal it, implementing such a dependency with one requires blocking the CPU in xrAcquireSwapchainImage on the compositor's submission of its corresponding release barrier, which seems to defeat the purpose of a separate xrWaitSwapchainImage call.

This can be worked around with timeline semaphores, but that prevents compatibility with older drivers, and presumably wasn't the intention of the spec as it leaves xrWaitSwapchainImage as a noop. Alternatively, semaphores could perhaps be replaced with VkEvents that are signaled after blocking the CPU in xrWaitSwapchainImage, but this is unconventional, and events are thought to be poorly supported by most Vulkan implementations.

What is the difference between parentstruct and structextends?

These seem to be used in similar numbers in the specification. Confusingly, the line parentStructs = type_elem.get('structextends') appears in reg.py.

I did a cursory search of the repository, but it wasn't immediately clear what the semantic difference is.
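
As a guess at the distinction (unverified; this is exactly what the question asks), the two attributes appear to correspond to two different usage patterns in application code, sketched below with relationships I believe exist in the registry:

/* Pattern that appears to go with parentstruct: the struct starts with the
   same type/next fields as a base-header struct and is passed via a pointer
   to that base type, e.g. XrCompositionLayerQuad and
   XrCompositionLayerBaseHeader. */
XrCompositionLayerQuad quad = {XR_TYPE_COMPOSITION_LAYER_QUAD};
const XrCompositionLayerBaseHeader* layers[] = {
    (const XrCompositionLayerBaseHeader*)&quad};

/* Pattern that appears to go with structextends: the struct is chained onto
   another struct's next pointer, e.g. XrDebugUtilsMessengerCreateInfoEXT
   extending XrInstanceCreateInfo (other fields omitted). */
XrDebugUtilsMessengerCreateInfoEXT debugInfo = {
    XR_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT};
XrInstanceCreateInfo createInfo = {XR_TYPE_INSTANCE_CREATE_INFO};
createInfo.next = &debugInfo;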

Correct time to call xrBeginFrame is unclear

The spec says:

Applications should call xrBeginFrame right before executing any graphics device work for a given frame

"graphics device work" is not well defined. For Vulkan, the most plausible interpretation is that it should be called immediately prior to the first vkQueueSubmit related to rendering the frame. However, another interpretation might be to call it before beginning to record command buffers.

The reference guide further confuses matters by specifying that xrBeginFrame be called immediately after xrWaitFrame, but this seems unlikely to be correct, as it would tend to include significant CPU work. Further, if that were the intended pattern there would be no point to having a separate function call. It also clearly sequences it prior to the swapchain image acquire/wait operations, which are certainly not GPU work for the current frame and may block the thread.

My understanding is that xrBeginFrame's purpose is to improve a runtime's ability to reason about the time required by an application to render a frame. If applications aren't calling it at a consistent time with regard to other operations, that purpose will be defeated, so specific language should be added defining precisely when xrBeginFrame should be called with regard to per-frame CPU work, GPU work (with a clearer definition of its own), and other OpenXR calls, and the reference guide updated to match.
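
To make the ambiguity concrete, here is a non-normative sketch of one plausible per-frame ordering under the "immediately before the first queue submit" interpretation; session, queue, and submitInfo are assumed to exist already:

XrFrameState frameState = {XR_TYPE_FRAME_STATE};
XrFrameWaitInfo waitInfo = {XR_TYPE_FRAME_WAIT_INFO};
xrWaitFrame(session, &waitInfo, &frameState);

/* CPU work for this frame: simulation, culling, command buffer recording... */

XrFrameBeginInfo beginInfo = {XR_TYPE_FRAME_BEGIN_INFO};
xrBeginFrame(session, &beginInfo);   /* right before GPU work for the frame */
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);

XrFrameEndInfo endInfo = {XR_TYPE_FRAME_END_INFO};
endInfo.displayTime = frameState.predictedDisplayTime;
endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
/* layers omitted for brevity */
xrEndFrame(session, &endInfo);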

Section 2.9: Do runtimes have to detect invalid handles or not?

In the fundamentals chapter it is stated that:

2.9. Handles
Runtimes are not required to detect invalid handles. Usage of an invalid handle may result in undefined behavior.
All functions that take a handle parameter may return XR_ERROR_HANDLE_INVALID.

This looks like runtimes are allowed to just cast internal object pointers to handles and to leave invalid handle detection to the validation layers. But in several function descriptions the opposite is stated, for example:

7.3.3. xrCreateActionSpace
The runtime must return XR_ERROR_HANDLE_INVALID if the action provided in action is not a valid action.

Is it must or may? Are runtimes required to implement invalid handle detection or not?

Why doesn't XrCompositionLayerColorScaleBiasKHR `structextend` the children of XrCompositionLayerBaseHeader?

At the moment it seems as though there's no valid place to use XrCompositionLayerColorScaleBiasKHR. I'm forbidden from extending any structs other than XrCompositionLayerBaseHeader with it, but I can't use XrCompositionLayerBaseHeader directly!

A simple fix (for what I think is the intended behaviour) would be to add the children of XrCompositionLayerBaseHeader to the structextends attribute of XrCompositionLayerColorScaleBiasKHR.

Build instructions omit requirement for python3-jinja2

I'm trying to generate the headers via make header on WSL, so I followed the "Debian-derived Linux (including Ubuntu and Ubuntu on WSL)" instructions from the readme. But upon running make header in the specification folder, I got the following error:

[genxr]       registry/xr.xml -> out/1.0/openxr/openxr.h
[genxr]       registry/xr.xml -> out/1.0/openxr/openxr_platform.h
[genxr]       registry/xr.xml -> out/1.0/openxr/openxr_reflection.h
Traceback (most recent call last):
  File "./scripts/genxr.py", line 473, in <module>
    genTarget(args)
  File "./scripts/genxr.py", line 363, in genTarget
    diagFile=diag)
  File "<local path>/OpenXR-Docs/specification/scripts/creflectiongenerator.py", line 56, in __init__
    self.env = make_jinja_environment(file_with_templates_as_sibs=__file__)
  File "<local path>/OpenXR-Docs/specification/scripts/jinja_helpers.py", line 99, in make_jinja_environment
    from jinja2 import Environment, FileSystemLoader
ModuleNotFoundError: No module named 'jinja2'
make: *** [Makefile:189: out/1.0/openxr/openxr_reflection.h] Error 1

I could fix it by running sudo apt install python3-jinja2. That package should probably be added to the instructions in the readme.

Clarify semantic of top level user paths

xrGetCurrentInteractionProfile takes a topLevelUserPath argument, and it is not entirely clear to me what it should look like in practice. My initial assumption would be a /user path (6.3.1), but the spec also states it needs to follow the device input subpath syntax, which confuses me, as the semantics of "top level" are unknown to me in this context.

XrPath is excessively UB-prone

The spec states:

An XrPath that is received from one XrInstance may not be used with another. Such an invalid use may be detected and result in an error being returned, or it may result in undefined behavior.

and

These XrPath values are only valid within a single XrInstance, and must not be shared between instances. Applications must instead use the string representation of a path in their code and configuration, and obtain the correct corresponding XrPath at runtime in each XrInstance

As the author of openxrs, I'm trying to develop memory-safe Rust bindings for OpenXR. This means that every API call that isn't explicitly an unsafe escape hatch must not be capable of invoking undefined behavior. The above restrictions make this very difficult, as anything that operates on an XrPath is potentially unsafe, representing a significant portion of the API. This makes it much harder to audit downstream code for safety violations. My only alternative is to represent XrPaths with a wrapper that tracks the associated instance, but that would undermine their value as compact, lightweight tokens that can be trivially copied and stored.

Most implementations will likely implement XrPath as indexes into an internal table that can be bounds-checked, and path type-checking is already required, so guaranteeing defined behavior here seems reasonable.

CC @rpavlik

Questions about Session lifecycle state

In the state transition diagram, when the application calls xrEndSession, the runtime switches from the Stopping state to the Idle state, and then switches to the Exiting state when the user issues an exit request.
I don't quite understand how a user exit request affects state switching. Which OpenXR function is called by the application?
My understanding is that when the application calls xrEndSession, the runtime switches from Stopping to Exiting. Is that right? Thanks!

Buffer Size Parameter elementCountOutput inconsistency

The Buffer Size Parameter Behavior section states:

2.11. Buffer Size Parameters

Buffer Size Parameter Behavior

  • Independent of elementCapacityInput or elements parameters, elementCountOutput must be a valid pointer, and the function sets elementCountOutput

But the Valid Usage (Implicit) sections of many functions seem to imply that their elementCountOutput parameter may be NULL. For example:

7.3.1. xrEnumerateReferenceSpaces

Valid Usage (Implicit)

  • If spaceCountOutput is not NULL, spaceCountOutput must be a pointer to a uint32_t value

Why not just:

  • spaceCountOutput must be a pointer to a uint32_t value
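
To illustrate why the blanket "must be a valid pointer" rule is the convenient reading, here is the usual two-call idiom as a sketch (session is assumed to be a valid handle); the count pointer is needed on both calls, including the capacity-0 "how many?" call:

uint32_t spaceCount = 0;
/* Capacity 0: only the count is written, so spaceCountOutput must be valid. */
xrEnumerateReferenceSpaces(session, 0, &spaceCount, NULL);

XrReferenceSpaceType* spaces = malloc(spaceCount * sizeof(XrReferenceSpaceType));
/* Second call: fill the array; the count is written again. */
xrEnumerateReferenceSpaces(session, spaceCount, &spaceCount, spaces);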

width and and height

XrCompositionLayerQuad has a "repeated and" in its member description.

This should be added to checkMarkup right next to "repeated the"

Clarify restrictions imposed on Vulkan application access to swapchain images

The spec says:

Once an application is done writing to a swapchain image, it is released.

"done writing" is ambiguous. For Vulkan, the ideal arrangement is that the application releases an image after submitting work, which will write to the swapchain image, to the queue identified by the XrGraphicsBindingVulkanKHR it supplied. However, the actual physical write will probably not have occurred at that point, so a strict reading of the spec might require that the application block the CPU until the work is complete, which would be a disaster for performance.

Similarly, the XR_KHR_vulkan_enable text refers to the state of the swapchain image in the present tense, e.g.

When an application releases a swapchain image [it must have] a memory layout compatible with VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL

whereas in practice this is not true until after submitted work, which should not block the CPU, has been executed.

A reasonable implementation will manage synchronization by inserting image memory barriers and semaphore signal/wait operations inside the OpenXR API calls permitted to access the Vulkan queue in question. The spec should explicitly guarantee that applications are not required to block the CPU on their graphics work or perform any additional synchronization beyond that already explicitly required by XR_KHR_vulkan_enable.
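
For concreteness, the non-blocking pattern this issue argues should be explicitly permitted looks roughly like the following sketch (swapchain, queue, and submitInfo are assumed to exist already):

uint32_t imageIndex = 0;
XrSwapchainImageAcquireInfo acquireInfo = {XR_TYPE_SWAPCHAIN_IMAGE_ACQUIRE_INFO};
xrAcquireSwapchainImage(swapchain, &acquireInfo, &imageIndex);

XrSwapchainImageWaitInfo imageWaitInfo = {XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO};
imageWaitInfo.timeout = XR_INFINITE_DURATION;
xrWaitSwapchainImage(swapchain, &imageWaitInfo);

/* Submit the rendering that writes the image; no fence wait on the CPU. */
vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);

XrSwapchainImageReleaseInfo releaseInfo = {XR_TYPE_SWAPCHAIN_IMAGE_RELEASE_INFO};
xrReleaseSwapchainImage(swapchain, &releaseInfo);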

Color space support

The current text has only this to say regarding color spaces:

Rendering operations involving composition of submitted layers should be assumed to be internally performed by the runtime in linear color space. Images submitted in sRGB color space must be created using an API-specific sRGB format (e.g. DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, GL_SRGBA, VK_FORMAT_R8G8B8A8_SRGB) to apply automatic sRGB-to-linear conversion when read by the runtime. All other formats will be treated as linear values.

This is ambiguous about what linear color space is used to interpret application images, since "linear" defines only a (trivial) transfer function, not a set of primaries. A reasonable reader could infer that linear sRGB is intended, but this should be made explicit.

Furthermore, no mechanism is provided for an application to supply frames in color spaces other than sRGB regardless of transfer function, i.e. color spaces with different RGB primaries. This is needed to support the full output range of future and existing devices, such as the Oculus Quest, which supports Rec.2020 natively and allows its use through the Oculus SDK.

Vulkan presents useful prior art for how such an interface might be defined. In particular, applications can query supported color spaces for a given surface with vkGetPhysicalDeviceSurfaceFormats2KHR and select a desired one when constructing a swapchain. VkColorSpaceKHR as extended by VK_EXT_swapchain_colorspace defines a reasonable set of possible color spaces. An implementation which advertises support for a color space on some surface guarantees that it can display images in that color space, either natively or by applying a transform, e.g. to correctly display sRGB images in a wider-gamut display.
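
A sketch of that Vulkan prior art, for illustration only (assumes VK_KHR_get_surface_capabilities2 is enabled and that physicalDevice and surface already exist):

VkPhysicalDeviceSurfaceInfo2KHR surfaceInfo = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SURFACE_INFO_2_KHR};
surfaceInfo.surface = surface;

uint32_t formatCount = 0;
vkGetPhysicalDeviceSurfaceFormats2KHR(physicalDevice, &surfaceInfo,
                                      &formatCount, NULL);

VkSurfaceFormat2KHR* formats = calloc(formatCount, sizeof(VkSurfaceFormat2KHR));
for (uint32_t i = 0; i < formatCount; i++) {
    formats[i].sType = VK_STRUCTURE_TYPE_SURFACE_FORMAT_2_KHR;
}
vkGetPhysicalDeviceSurfaceFormats2KHR(physicalDevice, &surfaceInfo,
                                      &formatCount, formats);
/* Each formats[i].surfaceFormat.colorSpace is a VkColorSpaceKHR value the
   application can select from when creating its swapchain. */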

It might be desirable to mandate support for some color spaces. Subject to the ambiguity described above, support for nonlinear sRGB is probably de facto mandatory as an OpenXR runtime must be well behaved when operating in un-extended 1.0 mode. A pass-through color space is also useful for applications which can determine characteristics of the device via a side channel such as platform-specific APIs or hardcoded knowledge of a specific device, where such characteristics do not necessarily correspond exactly to one of the standard color spaces. This is particularly desirable as it exposes the full capabilities of unforeseen future displays.

ViewConfigurationProperties viewConfigurationType member is unnecessary

An XrViewConfigurationProperties is obtained by passing a XrViewConfigurationType to xrGetViewConfigurationProperties. Having it return the same value you passed in back to you is both unnecessary (you necessarily already have that information) and confusing (my first thought was "wait, can this be something else?"). If that is indeed the only function of the member, it should be removed; otherwise, the cases where it can convey new information should be illustrated.

Grip spaces should have diagrams

https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#_standard_identifiers


The +Y axis is defined as "the Y axis implied by the given right handed X and Z axes". This would be fine, except:

  • It's a bit tricky to use the right handed thumb rule on your own hands
  • The Z axis is defined as -Z
  • We have X and Z defined, and need to find Y
  • The X axis points in a different direction for each hand

All of these put together are a recipe for sign errors. At the very least we should say that it "roughly points inwards towards the arm", but a diagram would be ideal.
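
For reference, the ambiguity can be pinned down numerically: for a right-handed basis the implied axis is Y = Z × X (cross product), which a small helper makes unambiguous.

typedef struct { float x, y, z; } Vec3;

/* Cross product; for a right-handed frame, cross(Z, X) yields +Y. */
static Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

/* yAxis = cross(zAxis, xAxis); e.g. with Z = (0,0,1) and X = (1,0,0)
   this gives Y = (0,1,0), as expected for a right-handed frame. */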

Naming style guide violations: XR_EXTX_overlay

There are two enum types in this extension whose value names violate the style guide. Given that this is tagged as provisional, I suspect we can simply fix these (without leaving the old ones as aliases) for a release in the near future.

See the PR by a community member that brought this to my attention: KhronosGroup/OpenXR-SDK-Source#171


Messages for XrOverlaySessionCreateFlagBitsEXTX

Error: registry/xr.xml:1342: Got an enum value whose name does not match the pattern: got XR_OVERLAY_SESSION_CREATE_RELAXED_DISPLAY_TIME_EXTX but expected something that ended with _BIT_EXTX due to typename being XrOverlaySessionCreateFlagBitsEXTX


Messages for XrOverlayMainSessionFlagBitsEXTX

Error: registry/xr.xml:1339: Got an enum value whose name does not match the pattern: got XR_MAIN_SESSION_BIT_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_EXTX but expected something that started with XR_OVERLAY_MAIN_SESSION due to typename being XrOverlayMainSessionFlagBitsEXTX
Error: registry/xr.xml:1339: Got an enum value whose name does not match the pattern: got XR_MAIN_SESSION_BIT_ENABLED_COMPOSITION_LAYER_INFO_DEPTH_EXTX but expected something that ended with _BIT_EXTX due to typename being XrOverlayMainSessionFlagBitsEXTX

Naming style guide violation: XrPerfSettingsNotificationLevelEXT values

A newly-enhanced xml-consistency script identified these issues:

Messages for XrPerfSettingsNotificationLevelEXT

Error: registry/xr.xml:1256: Got an enum value whose name does not match the pattern: got XR_PERF_SETTINGS_NOTIF_LEVEL_WARNING_EXT but expected something that started with XR_PERF_SETTINGS_NOTIFICATION_LEVEL due to typename being XrPerfSettingsNotificationLevelEXT
Error: registry/xr.xml:1256: Got an enum value whose name does not match the pattern: got XR_PERF_SETTINGS_NOTIF_LEVEL_IMPAIRED_EXT but expected something that started with XR_PERF_SETTINGS_NOTIFICATION_LEVEL due to typename being XrPerfSettingsNotificationLevelEXT
Error: registry/xr.xml:1256: Got an enum value whose name does not match the pattern: got XR_PERF_SETTINGS_NOTIF_LEVEL_NORMAL_EXT but expected something that started with XR_PERF_SETTINGS_NOTIFICATION_LEVEL due to typename being XrPerfSettingsNotificationLevelEXT

Documentation for xrSetInputDeviceStateXxxxEXT APIs is incorrect/copied from first API

Doc location: https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_EXT_conformance_automation

The first API, xrSetInputDeviceActiveEXT, has correct documentation for its fourth parameter:


Parameter Descriptions

    session is the XrSession to set the input device state in.

    interactionProfile is the path representing the interaction profile of the input device (e.g. /interaction_profiles/khr/simple_controller).

    topLevelPath is the path representing the input device (e.g. /user/hand/left).

    isActive is the requested activation state of the input device. <--- This Parameter here

The rest of these seem to have just copied the same parameter documentation, and all show the same isActive documentation for the last parameter:

xrSetInputDeviceStateBoolEXT
xrSetInputDeviceStateFloatEXT
xrSetInputDeviceStateVector2fEXT
xrSetInputDeviceLocationEXT

Incorrect "Valid Usage (Implicit)" for xrGetVulkanInstanceExtensionsKHR() and xrGetVulkanDeviceExtensionsKHR()

For the xrGetVulkanInstanceExtensionsKHR() function, the "Parameter Descriptions" documentation says:

buffer is a pointer to an array of characters, but can be NULL if bufferCapacityInput is 0. The format of the output is a single space (ASCII 0x20) delimited string of extension names.

However, the "Valid Usage (Implicit)" text below says:

If bufferCapacityInput is not 0, buffer must be a pointer to an array of bufferCapacityInput null-terminated UTF-8 strings

This conflicts with the above. The correct statement is that buffer is a pointer to a single string, not to an array of strings.

The same applies to xrGetVulkanDeviceExtensionsKHR(), which has a buffer parameter with analogous use.
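
For reference, the usage the "Parameter Descriptions" text describes looks like the following sketch; pfnGetVulkanInstanceExtensionsKHR is assumed to have been loaded via xrGetInstanceProcAddr, and instance/systemId to be valid:

uint32_t bufferSize = 0;
pfnGetVulkanInstanceExtensionsKHR(instance, systemId, 0, &bufferSize, NULL);

char* extensionNames = malloc(bufferSize);
pfnGetVulkanInstanceExtensionsKHR(instance, systemId, bufferSize, &bufferSize,
                                  extensionNames);
/* extensionNames now holds a single null-terminated, space-delimited string,
   e.g. "VK_KHR_surface VK_KHR_external_memory_capabilities" (illustrative). */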

Begin/EndFrame session parameter missing from list of externally synchronized parameters

10.4 states:

While xrBeginFrame and xrEndFrame do not need to be called on the same thread, the application must handle synchronization if they are called on separate threads.

However, there is no "Thread Safety" block associated with either function, and section 2.3 does not mention them. The required behavior is clear, but should probably be cross-referenced for ease of discoverability like the other cases.

11.5.4. xrGetActionStatePose missing XR_ERROR_PATH_INVALID return value

In Section 11.5.4. Pose Actions, the function description for xrGetActionStatePose does not explain what the runtime does when the application passes an invalid subactionPath parameter, or a valid subactionPath that was not part of the subactionPaths the action was created with. It only states:

See XrActionCreateInfo for a description of subaction paths, and the restrictions on their use.

There is no mention of subactionPath in the Valid Usage (Implicit) section. In the XrActionCreateInfo description it is stated that:

If subactionPaths is specified and any of the following conditions are not satisfied, the runtime must return XR_ERROR_PATH_INVALID:

  • Each path provided is one of:
    • /user/head
    • /user/hand/left
    • /user/hand/right
    • /user/gamepad
    • /user

This implies that only those five well-known paths are valid to use as the subactionPath parameter, but the Return Codes section for xrGetActionStatePose does not include the XR_ERROR_PATH_INVALID return value for a path that is not in this list, and it is not explained what will be returned if the path is in this list but was not given when creating the action.

Meaning of "invalid" position/orientation unclear

When components of an XrPosef are deemed invalid due to XrViewStateFlags or XrSpaceLocationFlags, it's unclear what guarantees are made about the components in question. Possibilities include:

  • They aren't written at all, and hence contain whatever they did before the call was made, including possibly remaining uninitialized.
  • Undefined data, possibly including a trap representation, is written, so reading from them is UB.
  • Garbage data is written, so initialization is guaranteed and reading is defined, but there may be NaNs or the quaternion may be of non-unit length.
  • Well-formed data is written, i.e. all floats are finite and the quaternion is of unit length, but no guarantees are made about the usefulness of this data.

Given the possibility of undefined behavior, the exact outcome should be made explicit.

Missing event to notify of changes in recommended view image size

This is needed to allow applications to respond to changes in runtime configuration (e.g. SteamVR's application supersampling slider) without restarting or polling xrEnumerateViewConfigurationViews every frame. Applications should be encouraged to respond to such configuration changes dynamically so users can judge the visual impact of settings changes.

CC @rpavlik

GL_SRGBA

Filing an issue to correspond to the pull request #37 that has been here for a while, so that it will get synced over to gitlab.

cc @jherico

The spec states that "With an OpenGL-based graphics API, the texture formats correspond to OpenGL internal formats.", but the example used for GL is GL_SRGBA, which is not an internal format; it corresponds to a typeless format.

OpenXR Vulkan swapchains cannot be used with VK_KHR_imageless_framebuffer

The VK_KHR_imageless_framebuffer extension (promoted to a core feature in Vulkan 1.2) allows Vulkan render pass attachments to be specified inline when a render pass is begun, making it much easier to perform pipelined rendering where one of N depth buffers is associated with one of M swapchain images just-in-time. This is desirable because it allows the application to allocate the minimum amount of depth buffer memory necessary for a given target pipeline depth, regardless of how many swapchain images the OpenXR implementation unilaterally issues.

Unfortunately, it is impossible to use this extension with OpenXR, because the image usage flags the swapchain image was created with must be known exactly, and OpenXR makes no guarantees regarding them and provides no interface to query them.
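
To show why the exact usage matters, here is a sketch of what the extension requires per attachment; swapchainUsage is precisely the value OpenXR currently gives no way to obtain, and width, height, and colorFormat are placeholder variables:

VkFramebufferAttachmentImageInfo attachmentInfo = {
    VK_STRUCTURE_TYPE_FRAMEBUFFER_ATTACHMENT_IMAGE_INFO};
attachmentInfo.usage = swapchainUsage;   /* must match the image's create-time usage */
attachmentInfo.width = width;
attachmentInfo.height = height;
attachmentInfo.layerCount = 1;
attachmentInfo.viewFormatCount = 1;
attachmentInfo.pViewFormats = &colorFormat;

VkFramebufferAttachmentsCreateInfo attachmentsInfo = {
    VK_STRUCTURE_TYPE_FRAMEBUFFER_ATTACHMENTS_CREATE_INFO};
attachmentsInfo.attachmentImageInfoCount = 1;
attachmentsInfo.pAttachmentImageInfos = &attachmentInfo;
/* Chain attachmentsInfo onto VkFramebufferCreateInfo and set
   VK_FRAMEBUFFER_CREATE_IMAGELESS_BIT in its flags. */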

Swapchain images cannot be used as Vulkan input attachments

XrSwapchainUsageFlags has no value that is guaranteed to map to VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT. This makes it impossible to perform certain kinds of efficient post-processing directly on swapchain images.

For color attachments this may not be a serious problem, since many applications render to a non-swapchain image anyway (e.g. for HDR), but for depth attachments for which an application requires input attachment usage, the application must either perform a redundant copy or use less-efficient sampled image access.

Semantics of relative inputs are unclear

§6.3.2 defines:

trackball - A 2D relative input source. These sometimes include click components.

However, no other mention is made of relative input. This leaves the exact behavior of actions bound to the input ambiguous: if xrGetActionStateVector2f returns an instantaneous sample of the relative input, significant error accumulation is likely due to missed inputs. A runtime could continuously integrate inputs between calls to xrSyncActionData to prevent error accumulation, but this is not required (or perhaps even permitted?) by the spec, and is inconsistent with the presumed instantaneous sampling of other inputs. Completely lossless processing of relative inputs ultimately requires an input event queue, which OpenXR does not currently provide.

Possibly also relevant is the reservation of /user/treadmill, which implies a relative input source.

A similar problem exists for instantaneous inputs, as for example might be emitted by a device or runtime that implements gesture recognition or threshold events.

CC @rpavlik

Can length annotations for the output sizes of returned arrays be added?

For example the spec says here that the length of displayRefreshRates is displayRefreshRateCapacityInput, but this doesn't tell the whole story after the function is called! Is it possible to expand the spec with information about how many values in the array are valid (i.e. displayRefreshRateCountOutput) after the call returns?

      <command successcodes="XR_SUCCESS,XR_SESSION_LOSS_PENDING" errorcodes="XR_ERROR_INSTANCE_LOST,XR_ERROR_SESSION_LOST,XR_ERROR_RUNTIME_FAILURE,XR_ERROR_HANDLE_INVALID,XR_ERROR_SIZE_INSUFFICIENT,XR_ERROR_FUNCTION_UNSUPPORTED,XR_ERROR_VALIDATION_FAILURE">
        <proto><type>XrResult</type> <name>xrEnumerateDisplayRefreshRatesFB</name></proto>
        <param><type>XrSession</type> <name>session</name></param>
        <param optional="true"><type>uint32_t</type> <name>displayRefreshRateCapacityInput</name></param>
        <param><type>uint32_t</type>* <name>displayRefreshRateCountOutput</name></param>
        <param optional="true" len="displayRefreshRateCapacityInput"><type>float</type>* <name>displayRefreshRates</name></param>
      </command>

FWIW the Vulkan spec uses the same parameter for input and output so doesn't fall into this problem, although it's probably too late to change things now!

6.4. Missing PlayStation Move controllers

Chapter 6.4. Interaction Profile Paths is missing a PlayStation Move controller profile. These controllers have the square/cross/circle/triangle buttons mentioned in 6.3.2. Device input subpaths, but they are not arranged in a diamond shape. Further, they have a big thumb button which does not have a matching input source specified. …/input/thumbrest/click would come close, but doesn't really describe it properly.

Require or at least recommend the use of hysteresis for suggested input mappings

For actions created with ename:XR_ACTION_TYPE_BOOLEAN_INPUT when the runtime is obeying suggested bindings: Boolean input sources must: be bound directly to the action. If the path is to a scalar value, a threshold must: be applied to the value and values over that threshold will be ename:XR_TRUE. The threshold may vary from device to device or component to component and is left as an implementation detail.

I suggest that this is too vague and that mapping analog inputs to boolean actions should require that implementations ensure that potential sources of noise do not cause the input to be rapidly triggered over and over when the analog source is very close to a given threshold. Noise in this case could be either muscle twitches by a user that are detectable by hardware but below the level of the intention of squeezing or releasing, or electronic noise from an analog sensing unit.

Saying that implementations must provide some level of hysteresis on such analog-to-boolean mappings, and should attempt to ensure that the hysteresis range is appropriate for the potential noise from a given analog input, taking into consideration both the ergonomics of the device and its electronic properties, would be appropriate, IMO.
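
A minimal sketch of the requested behavior, with made-up threshold values: the boolean state only flips when the analog value crosses one of two separated thresholds, so noise near a single threshold cannot toggle it rapidly.

#include <stdbool.h>

#define PRESS_THRESHOLD   0.65f   /* illustrative, not from the spec */
#define RELEASE_THRESHOLD 0.45f

static bool update_boolean_state(bool currentState, float analogValue)
{
    if (!currentState && analogValue > PRESS_THRESHOLD) {
        return true;    /* squeeze detected */
    }
    if (currentState && analogValue < RELEASE_THRESHOLD) {
        return false;   /* release detected */
    }
    return currentState; /* inside the hysteresis band: keep the prior state */
}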

Are EXT functions missing the XR_ERROR_INSTANCE_LOST error return code?

Is there any rule governing whether or not a function can return the XR_ERROR_INSTANCE_LOST error code, besides:

4.4.1. The Instance Lost Error
Once XR_ERROR_INSTANCE_LOST is first returned, it must henceforth be returned by all functions that involve an XrInstance or child handle type until the instance is destroyed.

?

For example, xrCreateInstance can return XR_ERROR_INSTANCE_LOST, which I don't understand the purpose of. But the EXT functions are missing it: for example, neither xrPerfSettingsSetPerformanceLevelEXT nor xrCreateDebugUtilsMessengerEXT contains it in its list of failure return codes.

I would have expected all EXT functions that have an XrInstance or XrSession parameter to have the XR_ERROR_INSTANCE_LOST return value.
