
Comments (8)

petergu684 commented on August 20, 2024

I don't have a comprehensive answer for you, but the short answer is that the sensor takes a while to open and close (say 1-2 seconds, though I never measured it), so if you call your uvToWorld in each frame, there is obviously a problem there. You may want to let the sensor keep running in the background and find some way to query the uv through another function, for example via a shared variable that tells the thread which uv you want to back-project, with the result saved somewhere you can access from the main thread. Your uv is basically the i and j in LongDepthSensorLoop. Hope that helps.

from hololens2-researchmode-unity.

Tolerm commented on August 20, 2024

@petergu684 Thanks for your answer!
I set a boolean so that OpenStream() runs only once while the buffer is fetched on every call, and it works! The result of the coordinate transformation still has obvious errors, but I will try to solve that next.
(Maybe it's because the input point comes from the MixedRealityCapture video stream via the Windows Device Portal at 1280x720, while the LongThrow mode's resolution is 320x288, and my conversion between the two resolutions is wrong.)
Thanks again for this repo and your reply!


EliasMosco commented on August 20, 2024

@Tolerm how did you get this working? When I try it, I get an error about an overload of 'std::invoke'. Was there something special you did with your header file?


Tolerm commented on August 20, 2024

@EliasMosco Are you talking about OpenStream()? I just used a boolean. In HL2ResearchMode.h:

```cpp
struct HL2ResearchMode : HL2ResearchModeT<HL2ResearchMode>
{
    ......
private:
    ...
    std::atomic_bool m_isDepthSensorStreamOpen = true;   // this boolean
    static void MyFunc(HL2ResearchMode* pHL2ResearchMode);
    ...
};
```

and in HL2ResearchMode.cpp, it's like below:

```cpp
void HL2ResearchMode::MyFunc(HL2ResearchMode* pHL2ResearchMode)
{
    if (pHL2ResearchMode->m_isDepthSensorStreamOpen)
    {
        pHL2ResearchMode->m_longDepthSensor->OpenStream();
        pHL2ResearchMode->m_isDepthSensorStreamOpen = false;
    }
    ...
}
```

It's very simple, and you can change it according to your needs. If you are looking for a way to control multiple sensor streams, the official Microsoft doc ResearchMode-ApiDoc.pdf and the repo HoloLens2ForCV may help you; both can be found at https://github.com/microsoft/HoloLens2ForCV


LydiaYounsi commented on August 20, 2024

Hello @Tolerm, did you manage to get it to work after all? If so, do you happen to have a repository? I'm trying to achieve the same thing!


Tolerm commented on August 20, 2024

> Hello @Tolerm, did you manage to get it to work after all? If so, do you happen to have a repository? I'm trying to achieve the same thing!

Well, I'm working on the transformation between the PV camera and the depth camera, but I'm not sure the method I'm using now is correct. In principle, since the cameras and modes are the same according to the ResearchMode ApiDoc, the transformation on HoloLens 2 should be the same as on HoloLens 1, so work done for HoloLens 1 should still be valuable. I mainly referred to the following:

  1. petergu684's repo (this one);
  2. HoloLens2ForCV's sample StreamRecorder, https://github.com/microsoft/HoloLens2ForCV/tree/main/Samples/StreamRecorder#stream-recorder-sample
  3. LisaVelten's code, https://github.com/microsoft/HoloLensForCV/issues/119#issuecomment-553098740

If your goal is the same as mine, namely to take a (u,v) in the PV camera image and, using the depth at that image point, transform it to a 3D point in Unity's coordinate system, then my current method is:

  1. get a (u,v) in the PV camera coordinate system;
  2. multiply (u,v) by the inverse of the PV camera intrinsics (obtained from camera calibration) to get an (x,y,z) coordinate;
  3. form the vector (x,y,-1,1) and multiply it by the PVToWorld matrix to get a point (X,Y,Z,W) in the world coordinate system; the PVToWorld matrix can be found, and understood, in the StreamRecorder project;
  4. transform (X,Y,Z,W) to the (u',v') in the depth camera following petergu684's method: a rigid transformation plus the perspective projection of the camera model;
  5. get the depth at (u',v') following petergu684's method, and then get the world point in Unity's coordinate system (or another coordinate system).

If your goal is instead to transform a (u,v) point in the depth camera's image coordinate system into an (X,Y,Z,W) in a given world coordinate system, you may do it like this:

  1. use the method MapImagePointToCameraUnitPlane(uv, xy);
  2. choose a world coordinate system Windows::Perception::Spatial::SpatialCoordinateSystem (please refer to petergu684's repo) and get the extrinsic matrix;
  3. multiply (x,y,1) by the extrinsic matrix; it's the reverse of petergu684's process.


LydiaYounsi commented on August 20, 2024

@Tolerm Thanks a lot for your answer! Could you provide more details about how you computed the intrinsics of the PV camera?


Tolerm commented on August 20, 2024

> @Tolerm Thanks a lot for your answer! Could you provide more details about how you computed the intrinsics of the PV camera?

I first tried the UndistortedProjectionTransform here, but the matrix I got looked like this:

```
[ 992.376    0        0  0
    0     -993.519    0  0
    0        0        1  0
  628.272  378.181    0  1 ]
```

I don't understand the meaning of the minus sign before 993.519. Since my PV camera always runs under one setting (1280x720, 30 fps), I assumed its intrinsics won't change, so in the end I used MATLAB's Camera Calibrator app with a calibration checkerboard to get the intrinsic parameters.
BTW, running the StreamRecorder app produces some files from the PV camera and the depth camera, including the extrinsics of the PV camera and the extrinsics of the Long Throw mode depth camera (that extrinsic matrix may be relative to the rig coordinate system).

