

Seurat - Documentation

What is Seurat?

Seurat is a system for image-based scene simplification for VR. It converts complex 3D scenes with millions of triangles, including complex lighting and shading effects, into just tens of thousands of triangles that can be rendered very efficiently on 6DoF devices with little loss in visual quality, delivering high-fidelity graphics on mobile VR hardware. (One way to think of it: Seurat plays the same role on 6DoF devices that stereo panoramas play on 3DoF devices.)

The processing pipeline for static environments generates data for a single headbox (e.g. 1 m³ of space). Input data can be generated with any rendering system, e.g. a real-time game engine or an offline ray tracer. We have plugins for Unity, Unreal and Maya. Seurat outputs a mesh with an RGBA texture atlas, which can be rendered in any real-time engine. Dynamic content can be composited on top of the static Seurat environments.

Using Seurat

Using Seurat requires three steps:

  1. Generating the RGBD input images from your scene needed for the Seurat processing pipeline.
  2. Running the images through the pipeline to generate the output geometry and RGBA texture atlas.
  3. Importing the Seurat output into your engine of choice.

This document primarily discusses (1) and (2) – generating the inputs to the pipeline and running it. We provide plugins for Unity and Unreal Engine to simplify steps (1) and (3), and we illustrate capture from raytracers via an Autodesk® Maya® script.

You can access them here:
Seurat Unity Plugin
Seurat Unreal Plugin
Seurat Maya Script

Scene Capture

A scene capture consists of a set of RGBD images and a JSON manifest describing the capture. The capture is organized into view groups. A view group is a set of views, each consisting of a camera and the associated image data (RGB and depth). The views of a view group must have disjoint camera frusta. A common setup is to render 32 cube maps (the view groups) from random positions inside the headbox.

The images can be generated with any offline or real-time rendering engine, but they must have the following properties. Each pixel must correspond to the color and depth value of exactly one camera ray. This means that antialiasing needs to be turned off and features such as depth of field and motion blur have to be disabled. The same applies to most screen space effects, e.g. bloom and tone mapping.
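The common setup described above — several cube-map view groups rendered from random positions inside the headbox — can be sketched as follows. This is an illustration only: the helper name, headbox size, and view-group count are assumptions for the example, not values mandated by Seurat.

```python
import random

def sample_headbox_positions(center, size=1.0, count=32, seed=0):
    """Return `count` random positions inside a cube-shaped headbox.

    Each position would become the origin of one cube-map view group
    (six views with disjoint frusta).
    """
    rng = random.Random(seed)
    half = size / 2.0
    return [
        tuple(c + rng.uniform(-half, half) for c in center)
        for _ in range(count)
    ]

# 32 cube-map origins inside a 1 m^3 headbox centered at (0, 1, 2).
positions = sample_headbox_positions(center=(0.0, 1.0, 2.0))
```

Your rendering engine would then render color and depth cube maps from each of these positions to produce the capture.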

Command line parameters

input_path [default=""] [required] : Path to the input manifest.json file.

output_path [default=""] [required] : Base path to all output artifacts. Filename extensions will be added automatically. E.g. if output_path is foo, the pipeline will produce foo.obj and foo.png.

cache_path [default=""] : Directory for all cache artifacts. If empty, no cache will be used. Otherwise, results from the geometry stage will be cached in the specified directory. On subsequent runs, the geometry is loaded from this cache. This is useful for processing multiple texture channels for the same geometry or for iterating on lighting and shading.

single_face [default=""] : If not empty, process only the specified face of a cube around the headbox center. Must be one of 'front', 'back', 'left', 'right', 'bottom', 'top'. This is useful for fast previews where a full 360-degree scene is not required.

triangle_count [default=72000] : The maximum number of triangles to generate.

overdraw_factor [default=3.0] : The target amount of overdraw. Seurat bounds the average overdraw over a full 360-degree view. E.g. if overdraw_factor is set to 3, and the Seurat output is rendered into a cube map, then every pixel of that cube map will be covered by three quads on average. Seurat can allocate more quads in areas with high depth complexity (e.g. trees) and fewer in simpler areas (e.g. a solid wall). As a consequence, the maximum overdraw for any particular view direction is not guaranteed to be bounded (we are working on this).

peak_overdraw_factor [default=999.0] : The peak overdraw in any view direction. The default value effectively disables this feature. Set it to a small value (e.g. 5 or 6) to turn it on if overdraw is an issue.

gamma [default=1.0] : Gamma-correction exponent.

specular_filter_size [default=0.05] : The size of the filter used to 'bake' specular highlights. Smaller values bake sharper reflections. Larger values blur these out, yielding a more diffuse-looking representation. Use "+Infinity" for baking gbuffer values such as normal maps.

premultiply_alpha [default=true] : Determines whether output textures use premultiplied alpha.

ray_footprint [default=0.01] : The 'footprint' of a sample, along its depth. Larger values help fill & inpaint possible seams in the final geometry.

pixels_per_degree [default=13.0] : Resolution of the target display in pixels per degree. This parameter is used to determine texture size. It should be set to the resolution of the target HMD. Note that Seurat may automatically lower the resolution to not exceed the max_texture_size.

texture_width [default=4096] : The target width of the output texture. If necessary, the resolution in pixels_per_degree is reduced automatically to fit the result into an atlas of this size.

texture_height [default=4096] : The target height of the output texture. If necessary, the resolution in pixels_per_degree is reduced automatically to fit the result into an atlas of this size.

texture_alignment [default=4] : Alignment constraint (in pixels) on individual texture tiles in the atlas. Adjust this value as needed to match the alignment requirements of external block-based texture compression methods.

content_adaptive_resolution [default=false] : Determines whether to adapt local texture resolution based on texture content.

skybox_radius [default=200.0] : Half the side-length of the origin-centered skybox to clamp distant geometry. 0.0 indicates no skybox clamping should be performed.

fast_preview [default=false] : Determines whether to prefer speed over quality.

report_progress [default=true] : Print progress updates to stdout.

z_buffer [default=false] : When integrating Seurat output into an existing rendering pipeline, there are two main options for rendering its geometry: (1) render the Seurat output with alpha blending, without writing to a z-buffer, or (2) render it with alpha-to-coverage (a.k.a. "alpha to mask") with z-buffer writes enabled. This flag indicates which rendering mode will be used, and the output will be optimized for rendering with that method.

projective_texture_mapping [default=false] : Enables projective texture mapping. Otherwise object space texture mapping is used. Projective texture mapping significantly reduces texture distortion on grazing angle quads.

separate_opaque [default=false] : Determines whether separate meshes and texture atlases will be output for opaque and translucent parts of the scene.

alpha_threshold [default=0.95] : Defines the threshold for deciding whether a texture is opaque or translucent. A value between 0.0 and 1.0 is expected.

pixel_filter [default="gaussian"] : Pixel filter for texture generation. Must be one of 'box' (unit box filter), 'bspline' (cubic B-Spline), 'gaussian' (truncated Gaussian with radius = 1.5 and sigma = 0.3).
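To tie the flags above together, a typical invocation might be assembled as in the sketch below. The binary name matches the pipeline target in this repo, but the paths and non-required flag values are illustrative assumptions, not recommendations.

```python
import subprocess

# Only input_path and output_path are required; the remaining flags
# repeat their documented defaults for illustration.
cmd = [
    "./seurat",
    "-input_path=capture/manifest.json",
    "-output_path=out/scene",      # pipeline emits out/scene.obj, out/scene.png
    "-triangle_count=72000",
    "-overdraw_factor=3.0",
    "-pixels_per_degree=13.0",
]

def run_pipeline(command):
    """Launch the Seurat pipeline; raises if the binary exits nonzero."""
    return subprocess.run(command, check=True)

# run_pipeline(cmd)  # uncomment once the seurat binary is built
```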

JSON Manifest

Coordinate Spaces

All coordinate spaces follow the OpenGL convention of right-handed coordinate systems with cameras looking down the negative Z-axis. The naming convention for matrices is foo_from_bar_matrix for a matrix that transforms from bar-space to foo-space. E.g. the world_from_eye_matrix transforms points or vectors in eye-space into world-space.
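To make the naming convention concrete, here is a small pure-Python sketch (matrix values are illustrative) applying a world_from_eye_matrix to an eye-space point:

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# world_from_eye: the eye-space origin sits at world position (1, 2, 3).
world_from_eye = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
]

eye_point = [0.0, 0.0, -5.0, 1.0]  # 5 units in front of the camera (-Z)
world_point = mat_vec(world_from_eye, eye_point)  # -> [1.0, 2.0, -2.0, 1.0]
```

Reading the name right to left — "world from eye" — tells you the matrix consumes eye-space coordinates and produces world-space coordinates.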

Depth Encoding

Seurat currently supports three depth encodings: WINDOW_Z, EYE_Z and RAY_DEPTH. Other encodings can be added as needed.

WINDOW_Z : Depths are the window-space Z coordinate (Z/W, as in Z buffer from GL) in the range [0.0, 1.0].

EYE_Z : Depths are the negated eye-space Z coordinate in the range [0, inf).

RAY_DEPTH : Depths are distances along a normalized ray (unit length direction vector) through a pixel center. In other words, this is the distance between the point and the origin in eye-space.
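The three encodings can be related as in the following sketch, which assumes a standard OpenGL-style perspective projection with near/far clip planes; the function names are illustrative, not part of Seurat's API.

```python
def window_z_from_eye_z(eye_z, near, far):
    """WINDOW_Z in [0, 1] from the negated eye-space Z (EYE_Z > 0)."""
    ndc_z = (far + near) / (far - near) - 2.0 * far * near / ((far - near) * eye_z)
    return 0.5 * ndc_z + 0.5

def eye_z_from_window_z(window_z, near, far):
    """Invert the mapping above: recover EYE_Z from a Z-buffer value."""
    ndc_z = 2.0 * window_z - 1.0
    return 2.0 * far * near / (far + near - ndc_z * (far - near))

def ray_depth_from_eye_z(eye_z, ray_dir):
    """RAY_DEPTH: distance along the unit-length ray through the pixel.

    ray_dir is the normalized eye-space direction; cameras look down -Z,
    so -ray_dir[2] is the fraction of the ray advancing along that axis.
    """
    return eye_z / -ray_dir[2]

# For the pixel on the optical axis (ray_dir = (0, 0, -1)),
# RAY_DEPTH equals EYE_Z.
```

Off-axis, RAY_DEPTH is always larger than EYE_Z for the same point, which is why the two encodings must not be confused when generating depth images.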

Spec

Capture := {
  Point3f headbox_center;  // optional
  ViewGroup view_groups[];
}
ViewGroup := {
  View views[];
}
View := {
  ProjectiveCamera projective_camera;
  DepthImageFile depth_image_file;
}
ProjectiveCamera := {
  int image_width;
  int image_height;
  Matrix4f clip_from_eye_matrix;
  Matrix4f world_from_eye_matrix;
  String depth_type := "WINDOW_Z", "EYE_Z", "RAY_DEPTH";
}
DepthImageFile := {
  Image4File color;
  Image1File depth;
}
Image4File := {
  String path;
  String channel_0;
  String channel_1;
  String channel_2;
  String channel_alpha;
}
Image1File := {
  String path;
  String channel_0;
}
Point3f := double[3]
Matrix4f := double[16]

Description

The root node is a Capture object.

Matrices are in row-major order.

Channel names in Image4File and Image1File can be arbitrary strings (for OpenEXR input) or the following reserved channel names:

CONSTANT_ZERO : Fill channel with 0.0. Supported for all file formats and image types.

CONSTANT_ONE : Fill channel with 1.0. Supported for all file formats and image types.

R, G, B, A : Use hard-coded color channel from e.g. a PNG file. For OpenEXR input, these names are treated like any other string.

The color and depth images may reference the same file and load from different channels.

The alpha channel is currently only used to mask out pixels if the value is zero. All other values are treated as opaque.

File paths can either be relative to the manifest file, or absolute paths.

Image file formats are automatically detected. Seurat currently supports OpenEXR and PNG.

All views in a view group must have disjoint view frusta. The six faces of a cube map, for example, meet this criterion.

If headbox_center is specified, then all camera world-from-eye matrices are transformed to be relative to this location. This is useful if cameras are specified in world-space.

Example JSON

{
  "headbox_center": [0.0, 1.0, 2.0],
  "view_groups" : [
    {
      "views" : [
        {
          "projective_camera" : {
            "image_width" : 1024,
            "image_height" : 1024,
            "clip_from_eye_matrix" : [
              1.0, 0.0, 0.0, 0.0,
              0.0, 1.0, 0.0, 0.0,
              0.0, 0.0, 1.0, 0.0,
              0.0, 0.0, 0.0, 1.0
            ],
            "world_from_eye_matrix" : [
              1.0, 0.0, 0.0, 1.0,
              0.0, 1.0, 0.0, 2.0,
              0.0, 0.0, 1.0, 3.0,
              0.0, 0.0, 0.0, 1.0
            ],
            "depth_type" : "EYE_Z"
          },
          "depth_image_file" : {
            "color" : {
              "path" : "images/0001_front_color.exr",
              "channel_0" : "R",
              "channel_1" : "G",
              "channel_2" : "B",
              "channel_alpha" : "CONSTANT_ONE"
            },
            "depth" : {
              "path" : "images/0001_front_depth.exr",
              "channel_0" : "Z"
            }
          }
        }
      ]
    }
  ]
}
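A manifest like the example above is usually emitted programmatically by the capture tool. The sketch below writes a minimal single-view manifest with Python's json module; the identity matrices are placeholders — a real capture needs a proper projection matrix and per-view poses with disjoint frusta.

```python
import json

# Row-major 4x4 identity, flattened to 16 doubles as the spec requires.
identity = [1.0 if r == c else 0.0 for r in range(4) for c in range(4)]

manifest = {
    "view_groups": [{
        "views": [{
            "projective_camera": {
                "image_width": 1024,
                "image_height": 1024,
                "clip_from_eye_matrix": identity,   # placeholder projection
                "world_from_eye_matrix": identity,  # placeholder pose
                "depth_type": "EYE_Z",
            },
            "depth_image_file": {
                "color": {
                    "path": "images/0001_front_color.exr",
                    "channel_0": "R",
                    "channel_1": "G",
                    "channel_2": "B",
                    "channel_alpha": "CONSTANT_ONE",
                },
                "depth": {
                    "path": "images/0001_front_depth.exr",
                    "channel_0": "Z",
                },
            },
        }]
    }]
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

The image paths here are relative, so the manifest should sit next to the images/ directory when the pipeline reads it.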

DISCLAIMER: This is not an officially supported Google product.

seurat's People

Contributors

manfred-ernst


seurat's Issues

Why are there differences (scale, head box, cube size) between the Unity & Unreal Seurat Capture plugins?

I had first thought there was an issue with the Unreal capture plugin due to the lack of any mesh information being captured, but I have since changed my mind: I have a similar "Chatting with Vincent" model in both engines and noticed far more data is captured in Unity than in Unreal. I thought this was due to the headbox size (1 x 1 x 1 in Unity versus 100 x 100 x 100 in Unreal), but the actual size of the cube created did not seem to change even when I changed it in Unreal to 10 x 10 x 10 (1 x 1 x 1 seems to hang the texture generation part of the procedure). However, if I change the pipeline's skybox_radius to 400, I get 4x as much mesh detail since the box is now 4x larger. At skybox_radius=800 the detailed capture extends further out, but it seems to me I am having to make too many changes in the pipeline. I also noticed the mesh is 100x larger than the mesh captured in Unity, but too many things in my Unreal level are set up for VR scale, and some assets are scaled independently within the level to match, so just making a wholesale scale change is not practical. However, it is easy to bring the .obj file over to Blender or another 3D design tool and change the scale there before importing it back into an engine. (Also, if you want to use Blender to preview, make sure you select Color & Alpha under Texture/Influence, then set Blend = Multiply.)

blendertexturesetting

I guess knowing exactly what Google is trying to accomplish in the capture plugins would be very helpful since experimental iterations can take a great deal of time. However, if anyone else wants to contribute to this thread on what they have found in their own experimentation, that would be excellent.

In saying all this, it is great that the actual output is engine agnostic, allowing the capture process to be used in any engine/3D rendering tool if someone wants to take the time to build a plugin/tool. Again, a much more in-depth analysis of what is being accomplished will help us create a wider variety of Seurat tools.

Mesh corruption and texture lines

Hi
I've been experimenting with Seurat but keep getting mesh corruption like this
image
I'm also getting grey lines in the textures.
Here are my capture settings
image
I set the triangle count to 15000.
Any ideas as to where I'm going wrong?
Thanks,
Mark

Geometry tearing during positional movement in Butterfly viewer

Hi, I have a scene where, after I perform the capture in Unity and run the pipeline tool, the resulting ice file seems to have depth capture issues: part of the geometry is tearing (moving along with positional movement instead of staying fixed) as I move forward or left and right. It seems that the depth capture was incorrect. I tried a 384-point sample and a 2x2x5 headbox to see if more samples over a larger volume solved the problem, and it didn't. MSAA is turned off.

Any idea what could be causing this?

seurat-config

Seurat.exe obj processing time is in hours

Hello,
I have used the Unity plugin to generate the RGBA images and depth files. Now I am using seurat.exe to convert them to an obj. For me, the process takes 9 to 10 hours. I have an NVIDIA 1060 GPU and 16 GB of RAM. Is there a way to speed up the process?
Any help would be great!

Thanks in advance

Adding OSX support

It would be awesome to get this working with OSX. Thus far, I can get Ion to compile; however, the bazel build fails.

bazel --bazelrc tools/bazel_linux.rc build -c opt seurat/pipeline:seurat                  master 7b20ac6
WARNING: /private/var/tmp/_bazel_hays/374f9938d7a1ef07b0593fc9df44f4c9/external/gflags/WORKSPACE:1: Workspace name in /private/var/tmp/_bazel_hays/374f9938d7a1ef07b0593fc9df44f4c9/external/gflags/WORKSPACE (@com_github_gflags_gflags) does not match the name given in the repository's definition (@gflags); this will cause a build error in future versions
ERROR: /private/var/tmp/_bazel_hays/374f9938d7a1ef07b0593fc9df44f4c9/external/openexr/BUILD.bazel:358:15: Configurable attribute "defines" doesn't match this configuration (would a default condition help?).
Conditions checked:
 @openexr//:linux_x86_64
 @openexr//:windows_x86_64
ERROR: Analysis of target '//seurat/pipeline:seurat' failed; build aborted:

/private/var/tmp/_bazel_hays/374f9938d7a1ef07b0593fc9df44f4c9/external/openexr/BUILD.bazel:358:15: Configurable attribute "defines" doesn't match this configuration (would a default condition help?).
Conditions checked:
 @openexr//:linux_x86_64
 @openexr//:windows_x86_64
INFO: Elapsed time: 0.113s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)

I looked at Ion, and while it is encouraging that it compiles, it does seem to have some quirks. For instance, it compiles to /seurat/third_party/ion/gyp-out/mac-ninja/opt, yet no config references the mac-ninja folder.

I tried tweaking some of the files to create a 'mac_x86_64' config; however, eventually, I just hit dependency issues deeper in the build process. It's almost like I want to force the compiler to think it's 'linux_x86_64'.

If anyone has an idea of how to force the build process to compile as Linux, or how to perform the needed tweaks to get the build running on OSX I would love to know!

Using seurat on web

Hi,
Do you have any samples of how to render Seurat models in A-Frame or some other web technology?

com_github_gflags_gflags does not match the name

Hi, trying to install Seurat I got those errors:

(@com_github_gflags_gflags) does not match the name given in the repository's definition (@gflags)
missing input file '@ion//:gyp-out/win-ninja/opt_x64/tracinghelper.lib'.
ERROR: C:/seurat/seurat/pipeline/BUILD:78:1: //seurat/pipeline:seurat: missing input file '@ion//:gyp-out/win-ninja/opt_x64/tracinghelper.lib'.
C:/seurat/seurat/pipeline/BUILD:78:1 1 input file(s) do not exist.
missing input file '@ion//:gyp-out/win-ninja/opt_x64/iongfxutils.lib'.
ERROR: C:/seurat/seurat/pipeline/BUILD:78:1: //seurat/pipeline:seurat: missing input file '@ion//:gyp-out/win-ninja/opt_x64/iongfxutils.lib'.

Could it be because I don't use Visual Studio 2015 update 2?

Cannot run on Ubuntu: Terminate called after throwing an instance of 'std::bad_alloc'

Thanks for open source such great technology!

I already have input data (color, depth images and a manifest.json) generated in Unity.
Tried to run Seurat and got this error:

@ubuntu:~/Desktop/seurat/bazel-bin/seurat/pipeline$   ./seurat -input_path="input" -output_path="output"

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

These are my build result files: com_google_seurat.zip
(files were taken from the folder ~/.cache/bazel/_bazel_userben/270545aa59c185fa13f54b7338bedae1/execroot/com_google_seurat/)

I'm using Ubuntu 18.04 on a virtual machine, Bazel 0.13.0, Clang 3.9.1-19ubuntu1; the build step looks good, no error message.
(My main OS is Windows 10 64-bit, but I cannot build Ion on it, so I have to use Ubuntu; I've already created an issue here: google/ion#25)

Speculars and reflections not changing

I have just written an exporter to Seurat for 3ds Max (https://github.com/superrune/3dsmaxSeuratExport), and have tested a couple scenes in the Butterfly viewer. I was under the impression that Seurat was storing light fields on the surface points, so that the files would store the reflections and specs as they change according to viewing angle. However, I am not seeing any of that in Butterfly. Can Seurat store the reflections as they change?

Any chance of posting binaries?

Like many Unity developers I rarely have to touch Windows build pipeline stuff, and frankly the build instructions for Seurat look far from run-of-the-mill.

It would be fantastic if you could publish Windows binaries to the 'releases' tab of this repo. In the meantime I'm going to try the Linux build under WSL as that looks more straightforward.

Python version in toolchain docs

It seems that for building ion:

c:\Seurat\third_party\ion>ion\build.bat -c opt_x64

a specific version of Python is required to be installed (2.7, I believe). This might be worth mentioning in the supported-toolchain documentation.

Logo design

Hi, are you interested in having a logo for your project?

I could send you a few ideas.

180 lightfield

Hi,
I would like to bake a lightfield only for a specific view, i.e. just 180°, and leave the rest of the view transparent.
What's the best way to do that?
Thanks !

build failed for butterfly

I'm getting the following when trying to build butterfly:

$ ~/bin/bazel --bazelrc tools/bazel_linux.rc build -c opt seurat/viewer/butterfly:butterfly
INFO: Analysed target //seurat/viewer/butterfly:butterfly (0 packages loaded).
INFO: Found 1 target...
ERROR: /home/flynnt/.cache/bazel/_bazel_flynnt/8734449d4e1d216509d7e0e9da30fc96/external/glfw/BUILD.bazel:33:1: undeclared inclusion(s) in rule '@glfw//:glfw_linux':
this rule is missing dependency declarations for the following files included by 'external/glfw/src/xkb_unicode.c':
'/usr/lib/clang/6.0.0/include/stddef.h'
'/usr/lib/clang/6.0.0/include/__stddef_max_align_t.h'
'/usr/lib/clang/6.0.0/include/stdint.h'
'/usr/lib/clang/6.0.0/include/stdarg.h'
Target //seurat/viewer/butterfly:butterfly failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 0.635s, Critical Path: 0.27s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

I'm not familiar with Bazel. Some googling around suggested that something about C++ builtins should be added to CROSSTOOL (I think someone was having trouble compiling TensorFlow). Looking for CROSSTOOL in Seurat revealed bazel_linux.rc, which has a comment along the lines of:
"The 'correct' way to do this would be to use a custom CROSSTOOL, but this is much simpler." Do you have suggestions on how to go about fixing this?

Blender ?

Any possibility for blender plug-in to simplify scenes?

Provide examples

I came across this project via an article that has a small video clip from Blade Runner that isn't that useful, and a before/after of roughly 47 million to 300k polys.

It'd be nice if there were some before/after images/videos, especially showing off wireframe shading (no texture, just wireframe on a solid colour) of what Seurat is capable of. Comparisons to existing decimation/remesh methods would also be good; MeshMixer has some nice methods.

Compiled Seurat (exe for windows)

Hello,

I can't build Seurat. To build Seurat, I must build Ion, but when building Ion I get an error while running build.bat -c opt_x64:

ninja -t msvc -e environment.x64 -- cl.exe /nologo /showIncludes /FC @obj\third_party\icu\icu4c\source\common\ionicu.brkiter.obj.rsp /c ..\..\..\third_party\icu\icu4c\source\common\brkiter.cpp /Foobj\third_party\icu\icu4c\source\common\ionicu.brkiter.obj /Fdobj\ion\external\ionicu.cc.pdb
ninja: fatal: CreateProcess: File can not be found.

I have Python 2.7 installed, but on Windows 10 I can't build it.

Can you please upload a prebuilt EXE file for Windows 10? I really want to test Seurat. It looks awesome.

Problem when running through the pipeline

Hello,
I used the Seurat pipeline encoder for Windows at https://github.com/ddiakopoulos/seurat/releases
and tried to capture the example scene from the Seurat Unity Plugin project.
But I got an error:

C:\seurat-compiled-msvc2017-x64>seurat-pipeline-msvc2017-x64.exe -input_path C:\seuratOutput\manifest.json -output_path C:\seuratResult
Reading input: [ ] 00:00:00
ERROR: Point is outside of the frustum used for binning. Possible causes for this problem: geometry inside the headbox, incorrect matrices, incorrect depth values, other errors in the generation process. (Showing only the first error)

Does anyone have a solution? I tried many options in the Capture Headbox script, but it's not working.

Long Triangles along edges of meshes being projected into head space

I'm encountering a bug where long triangles are generated along the edges of objects, but rather than projecting into the distance they jut inward into the head space area. The cubemaps were rendered with anti-aliasing turned off in the hope that it would prevent these artifacts. Hopefully these screenshots illustrate the problem well enough.

Without wireframe
screenshot 2018-05-16 21 36 11

With wireframe
screenshot 2018-05-16 21 35 58

Top down:
image

Using RAY_DEPTH gives Point is outside of the frustum used for binning ERROR

When trying to run the pipeline I am getting this error:

Point is outside of the frustum used for binning. Possible causes for this problem: geometry inside the headbox, incorrect matrices, incorrect depth values, other errors in the generation process

I am working with the depth type RAY_DEPTH, with a near clip plane of 1 and a far clip plane of 20.
None of my objects are too close or too far away.

In the ZIP are my current manifest.json and one of my depth.exr images.

JSONandExr.zip

Create Scene with transparent background

When using Seurat on a scene with a completely transparent skybox, all transparent areas are painted black in the final output texture. Is it possible to create a scene that preserves the original background transparency?

ERROR: Headbox has zero extent along at least one dimension.

I've tried to replicate the Maya script inside 3ds Max, but when I run the output file it triggers this error:

ERROR: Headbox has zero extent along at least one dimension. Arbitrarily setting binning near clipping plane to 0.1 units from headbox center.
...
ERROR: Point is outside of the frustum used for binning. Possible causes for this problem: geometry inside the headbox, incorrect matrices, incorrect depth values, other errors in the generation process. (Showing only the first error)

Any idea on this?

I'm not sure if the content of my file helps:


{
	"view_groups": [
 		{
 			"views": [
				{
					"projective_camera": {
						"image_width":512,
						"image_height":512,
						"clip_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-1.00167,
							-0.0200167,
							0.0,
							0.0,
							-1.0,
							0.0
						],
						"world_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-0.5,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0
						],
						"depth_type": "EYE_Z"
					},
					"depth_image_file": {
						"color": {
						"path": "cosita_color_1MM_f.png",
							"channel_0": "R",
							"channel_1": "G",
							"channel_2": "B",
							"channel_alpha": "CONSTANT_ONE"
						},
						"depth": {
							"path": "cosita_depth_1MM_f.exr",
							"channel_0": "R"
						}
					}
				},
				{
					"projective_camera": {
						"image_width":512,
						"image_height":512,
						"clip_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-1.00167,
							-0.0200167,
							0.0,
							0.0,
							-1.0,
							0.0
						],
						"world_from_eye_matrix": [
							-1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-0.5,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0
						],
						"depth_type": "EYE_Z"
					},
					"depth_image_file": {
						"color": {
						"path": "cosita_color_1MM_b.png",
							"channel_0": "R",
							"channel_1": "G",
							"channel_2": "B",
							"channel_alpha": "CONSTANT_ONE"
						},
						"depth": {
							"path": "cosita_depth_1MM_b.exr",
							"channel_0": "R"
						}
					}
				},
				{
					"projective_camera": {
						"image_width":512,
						"image_height":512,
						"clip_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-1.00167,
							-0.0200167,
							0.0,
							0.0,
							-1.0,
							0.0
						],
						"world_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							-0.5,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0
						],
						"depth_type": "EYE_Z"
					},
					"depth_image_file": {
						"color": {
						"path": "cosita_color_1MM_l.png",
							"channel_0": "R",
							"channel_1": "G",
							"channel_2": "B",
							"channel_alpha": "CONSTANT_ONE"
						},
						"depth": {
							"path": "cosita_depth_1MM_l.exr",
							"channel_0": "R"
						}
					}
				},
				{
					"projective_camera": {
						"image_width":512,
						"image_height":512,
						"clip_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-1.00167,
							-0.0200167,
							0.0,
							0.0,
							-1.0,
							0.0
						],
						"world_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							-0.5,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0
						],
						"depth_type": "EYE_Z"
					},
					"depth_image_file": {
						"color": {
						"path": "cosita_color_1MM_r.png",
							"channel_0": "R",
							"channel_1": "G",
							"channel_2": "B",
							"channel_alpha": "CONSTANT_ONE"
						},
						"depth": {
							"path": "cosita_depth_1MM_r.exr",
							"channel_0": "R"
						}
					}
				},
				{
					"projective_camera": {
						"image_width":512,
						"image_height":512,
						"clip_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-1.00167,
							-0.0200167,
							0.0,
							0.0,
							-1.0,
							0.0
						],
						"world_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							-0.5,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0
						],
						"depth_type": "EYE_Z"
					},
					"depth_image_file": {
						"color": {
						"path": "cosita_color_1MM_d.png",
							"channel_0": "R",
							"channel_1": "G",
							"channel_2": "B",
							"channel_alpha": "CONSTANT_ONE"
						},
						"depth": {
							"path": "cosita_depth_1MM_d.exr",
							"channel_0": "R"
						}
					}
				},
				{
					"projective_camera": {
						"image_width":512,
						"image_height":512,
						"clip_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							-1.00167,
							-0.0200167,
							0.0,
							0.0,
							-1.0,
							0.0
						],
						"world_from_eye_matrix": [
							1.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0,
							-0.5,
							0.0,
							0.0,
							0.0,
							0.0,
							1.0
						],
						"depth_type": "EYE_Z"
					},
					"depth_image_file": {
						"color": {
						"path": "cosita_color_1MM_u.png",
							"channel_0": "R",
							"channel_1": "G",
							"channel_2": "B",
							"channel_alpha": "CONSTANT_ONE"
						},
						"depth": {
							"path": "cosita_depth_1MM_u.exr",
							"channel_0": "R"
						}
					}
				}
			]
		}
	]
}
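Manifests like the one above are easy to get subtly wrong by hand, and a malformed matrix or depth type tends to surface later as cryptic pipeline errors. Below is a minimal sanity-check sketch in Python; the field names, matrix layout, and the `EYE_Z` depth type come from the example above, while `WINDOW_Z` and `RAY_DEPTH` are assumed alternative values and should be checked against the Seurat source.

```python
def check_view(view):
    """Sanity-check one view entry from a Seurat capture manifest.

    Verifies that both camera matrices are 4x4 (16 values, row-major)
    and that the depth type is one of the expected names. Returns a
    list of human-readable problems; an empty list means the view
    passed these basic checks.
    """
    cam = view["projective_camera"]
    problems = []
    for key in ("clip_from_eye_matrix", "world_from_eye_matrix"):
        if len(cam[key]) != 16:
            problems.append(
                f"{key} has {len(cam[key])} values, expected 16")
    # EYE_Z is used in the example above; the other names are assumptions.
    if cam["depth_type"] not in ("EYE_Z", "WINDOW_Z", "RAY_DEPTH"):
        problems.append(f"unknown depth_type {cam['depth_type']!r}")
    return problems
```

Running this over every view in `view_groups` before invoking the pipeline catches truncated matrices early, where the error message can still point at the offending view.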

Code not touched for 5 months

I guess this code base is dead?
Sorry to be so blunt.

The idea behind Seurat is great, but dead code is a bad sign. Did another approach take off?

How to use the Unreal plugin?

I've compiled it successfully, and I can see some Seurat-related blueprints (cast to Seurat UE Plugin), but I have no idea how it actually works.

Thank you in advance

Documentation for offline-render depth-channel maps

Could someone clarify the required specifications for depth maps when rendering with an offline renderer? So far I have tried 3ds Max Scanline and V-Ray. Using the native 3ds Max Z-Depth with Scanline I get:

ERROR: Point is outside of the frustum used for binning. Possible causes for this problem: geometry inside the headbox, incorrect matrices, incorrect depth values, other errors in the generation process. (Showing only the first error)

With V-Ray's clamped and inverted VRayZDepth I get:

DFATAL [seurat/base/projective_camera.cc:91] DCHECK failed: expression='"(depth) >= (0.0f)"' (-5.96046e-08 >= 0), as well as the first error.

In both cases there are no objects inside the headbox. From the error messages alone it is essentially impossible to know what to change. Is there any further documentation on this subject and the error codes?
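The DCHECK above fails on a value of -5.96046e-08, which looks like floating-point round-off rather than genuinely invalid geometry: the `EYE_Z` depth type requires depth >= 0, and offline renderers sometimes emit tiny negative values near zero. One plausible workaround (an assumption, not a documented fix) is to clamp the depth channel before handing the EXR to the pipeline, sketched here with NumPy:

```python
import numpy as np

def clamp_depth(depth):
    """Clamp tiny negative depth values (float round-off) to zero.

    Seurat's EYE_Z depth type asserts depth >= 0; renderers such as
    V-Ray can produce values like -6e-08 from precision loss, which
    trips the DCHECK in projective_camera.cc.
    """
    depth = np.asarray(depth, dtype=np.float32)
    return np.maximum(depth, 0.0)
```

This addresses only the negative-epsilon failure; the "outside of the frustum" error points at a mismatch between the depth values (or camera matrices) and the manifest, which clamping will not fix.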

How to get optimized mesh models out

I'm trying to figure out how to get an optimized model out of the pipeline so I can use it as glTF. Running the Seurat pipeline on a capture from Unity produces an .obj and a PNG texture, but the result is both very large (10 MB .obj, 15 MB PNG) and still looks fairly rough. However, loading it in the viewer via the .ice file looks quite good. So what is the proper way to get a 3D model file out that is both good quality and small? The instructions don't say much beyond listing some flags.

Is the .obj the final generated artifact I should be using?

Also, what is in the .ice file that lets it render properly?

Maya script gives an error in the RadicalInverse calculation

When I try to launch the Maya script, Python throws an error in the RadicalInverse function:

Exception has occurred: OverflowError
int too large to convert to float

Could this be a result of using Python 3.6 instead of 2.7?

Can I use x**-1/2 and x**-1/3 to calculate the right values?

Thanks
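The OverflowError is a known Python 2/3 difference: under Python 3, `/` on large integers produces a float, and converting an integer beyond ~1.8e308 to float raises `OverflowError`. Assuming the Maya script computes the standard base-b radical inverse (used for Halton/Hammersley sample positions), a digit-by-digit formulation avoids ever converting a huge integer to float; the function below is a sketch of that approach, not the script's actual code:

```python
def radical_inverse(base, index):
    """Base-b radical inverse (van der Corput), stable under Python 3.

    Mirrors the digits of `index` in base `base` across the radix
    point, accumulating the result in floating point. Each loop step
    handles one digit, so no large integer is ever converted to float.
    """
    inv = 0.0
    digit_value = 1.0 / base
    while index > 0:
        inv += (index % base) * digit_value  # append next mirrored digit
        index //= base
        digit_value /= base
    return inv
```

Simple powers like x**-1/2 and x**-1/3 are not equivalent: the radical inverse permutes digits rather than taking a root, so substituting a power law would change the sample distribution.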
