
unipad's People

Contributors

nightmare-n


Forkers

seabird-go

unipad's Issues

About rendering

Thank you for your work. I would like to ask how sampling 512 rays per image view achieves such a high rendering resolution in the final rendered image (Figure 3 in your paper). For a high-definition image of 1600×900, 512 seems like a very small number.
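For context, the pixel-to-ray ratio the question points at can be computed directly. This is a quick sketch of the arithmetic only; the 512-ray figure is quoted from the issue above, and the per-iteration interpretation is an assumption about the training setup, not a statement about UniPAD's actual config:

```python
# Coverage arithmetic behind the question: 512 sampled rays vs. a full
# 1600x900 render target (numbers taken from the issue text).
width, height = 1600, 900
rays_per_view = 512

total_pixels = width * height              # 1,440,000 pixels
coverage = rays_per_view / total_pixels    # fraction sampled per view

print(f"{total_pixels} pixels, {coverage:.4%} sampled per view")
```

So only a tiny fraction of pixels receives supervision per view, which is presumably why the question asks how the full-resolution renders in Figure 3 come about (e.g. by evaluating every pixel at inference time rather than reusing a training ray batch).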

smooth_sampler ops build error

Hi, thank you for sharing this excellent work! When I try to install this project using python setup.py develop --user, I hit the following compilation error:
[screenshot: 2023-12-19, 10:33 AM]
How can I solve this error? Hoping for your reply!

A few questions on details

Thank you for releasing this amazing work! I just had a couple of questions on some of the camera-only outdoor details @Nightmare-n

  1. Are the same 6 images used for generating the 3D voxel grid and rendering (as is mentioned to be done for ScanNet in PonderV2)?
  2. Were the used ConvNeXt(V1?) backbones trained from scratch or with IN1k?
  3. Was any data augmentation used for the 2D training stage, besides regular MAE masking?
  4. When using the proposed depth-aware sampling, are the same 512 rays, sampled from the pixels with available LiDAR points, used for both color & depth rendering?
  5. In Table 8g, there appear to be trainable weights associated with the view transformation stage. Does the view transformation generally follow UVTR(?) with multi-scale sampling and depth weighting? Or perhaps single-scale?
  6. PonderV2 mentions supplementary material. Would it be possible for this to be made public?
  7. What (total) batch size was used?

Apologies for the list of questions, but I'm really interested in the work. Again, thank you so much in advance!
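As an aside, the scheme question 4 asks about can be sketched in a few lines: draw the ray pixels only from locations where a projected LiDAR depth exists, then reuse those same pixels for both the color and depth targets. Every name, shape, and threshold below is a hypothetical illustration of the question, not UniPAD's actual implementation:

```python
import numpy as np

def sample_rays_with_depth(depth_map, num_rays=512, rng=None):
    """Return (row, col) indices of ray pixels drawn only from locations
    with a valid (nonzero) projected LiDAR depth."""
    rng = rng or np.random.default_rng(0)
    valid_rows, valid_cols = np.nonzero(depth_map > 0)  # pixels hit by LiDAR
    pick = rng.choice(valid_rows.size, size=num_rays, replace=False)
    return valid_rows[pick], valid_cols[pick]

# Toy example: a sparse depth map on a 900x1600 image, ~20k LiDAR returns.
depth = np.zeros((900, 1600), dtype=np.float32)
hits = np.random.default_rng(1).choice(900 * 1600, size=20000, replace=False)
depth.flat[hits] = 10.0

rows, cols = sample_rays_with_depth(depth, num_rays=512)
assert np.all(depth[rows, cols] > 0)  # every sampled ray has a depth target
```

Under this reading, the same 512 (row, col) pairs would supervise both rendered color and rendered depth, which is what the question is asking the authors to confirm or correct.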
