Hey Carla,
If I remember correctly, you have to invert the matrix: https://docs.unity3d.com/ScriptReference/Matrix4x4-inverse.html.
Does that do anything useful?
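In case it helps, here is a small pure-Python sketch (with made-up values, not LiveScan's actual calibration) of what "inverting the matrix" means for a rigid transform: if world = R·local + t, the inverse mapping is local = Rᵀ·(world − t).

```python
def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

# Placeholder extrinsics: a 90-degree rotation about z and an arbitrary translation.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [1.0, 2.0, 3.0]

local = [4.0, 5.0, 6.0]
world = [a + b for a, b in zip(mat_vec(R, local), t)]  # world = R @ local + t

# Applying the inverse transform recovers the original point.
back = mat_vec(transpose(R), [a - b for a, b in zip(world, t)])
assert all(abs(a - b) < 1e-9 for a, b in zip(back, local))
```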
from livescan3d.
Hi Carla,
Sorry, I missed this message earlier. I'm just going through the code, trying to remember in what format the calibration is stored in the .txt file. The relevant code is in lines 710-714 of liveScanClient.cpp and in utils.cpp.
It appears that a point is transformed from local coordinates to world coordinates as follows:
x' = R(x + t).
The 4x4 matrix you use in Unity assumes a transform of the form x' = Rx + t. Thus, I believe what you need to do is multiply the translation you get from the .txt file by the rotation matrix from the same file. The same operation is also done in lines 138-145 of KinectSocket.cs.
Once you do that the position of the camera in Unity should match what you get in the app.
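If it helps, the conversion can be sketched like this (pure Python, with placeholder numbers rather than real calibration values): x' = R(x + t) distributes to Rx + Rt, so the translation Unity's convention needs is t' = Rt.

```python
def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Placeholder calibration (not real values): a 90-degree rotation about z.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [0.5, -0.25, 1.0]

x = [1.0, 2.0, 3.0]

# LiveScan's convention: x' = R(x + t)
x1 = mat_vec(R, [a + b for a, b in zip(x, t)])

# Unity's Matrix4x4 convention: x' = Rx + t'  with  t' = R t
t_unity = mat_vec(R, t)
x2 = [a + b for a, b in zip(mat_vec(R, x), t_unity)]

assert all(abs(a - b) < 1e-9 for a, b in zip(x1, x2))  # identical results
```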
Marek
Hi Marek!
Thank you for your response. I tried to compute the transformation as you suggested (x' = R(x + t)), but I still don't get the correct reconstruction.
I'm attaching the new code and the result obtained in Unity so you can see it more clearly and maybe spot some errors.
Code:
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TestCalib : MonoBehaviour
{
    public Vector3 position;
    public Vector3 rotation;
    public int index = 0;

    // 3x3 rotation matrix and translation vector read from the calibration .txt file
    Vector3 translation = new Vector3(0.0f, 0.0f, 0.0f);
    float[][] rotationMatrixCV =
    {
        new float[3],
        new float[3],
        new float[3],
    };

    private void Update()
    {
        //transform.rotation = Quaternion.Euler(cam1.TransformDirection(rotation));
        //transform.position = cam1.TransformPoint(position);
    }

    void Start()
    {
        // Set the rotation matrix and translation vector for the selected camera.
        if (index == 0)
        {
            rotationMatrixCV[0] = new float[3] { 0.25563f, 0.25744f, -1.51961f };
            rotationMatrixCV[1] = new float[3] { -0.192701f, -0.819072f, -0.540359f };
            rotationMatrixCV[2] = new float[3] { -0.981055f, 0.172012f, 0.0891256f };
            translation = new Vector3(0.0199481f, 0.547296f, -0.836701f);
        }
        else if (index == 1)
        {
            rotationMatrixCV[0] = new float[3] { 0.15183f, -0.06187f, -1.14252f };
            rotationMatrixCV[1] = new float[3] { -0.0408712f, -0.954359f, 0.295853f };
            rotationMatrixCV[2] = new float[3] { -0.997299f, 0.0208791f, -0.0704222f };
            translation = new Vector3(0.0610309f, -0.297932f, -0.952634f);
        }

        // Copy the 3x3 rotation into a homogeneous 4x4 matrix.
        var rotationMatrix = new Matrix4x4();
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                rotationMatrix[i, j] = rotationMatrixCV[i][j];
            }
        }
        rotationMatrix[3, 3] = 1f;

        // Convert x' = R(x + t) into x' = Rx + t' by rotating the translation: t' = R t.
        Vector4 translationVector = rotationMatrix * new Vector4(translation[0], translation[1], translation[2], 1.0f);
        var localToWorldMatrix = Matrix4x4.Translate(translationVector) * rotationMatrix;

        // The camera position is the last column of the local-to-world matrix.
        Vector3 position;
        position.x = localToWorldMatrix.m03;
        position.y = localToWorldMatrix.m13;
        position.z = localToWorldMatrix.m23;
        transform.position = position;

        // Build the orientation from the matrix's forward (third) and up (second) columns.
        Vector3 forward;
        forward.x = localToWorldMatrix.m02;
        forward.y = localToWorldMatrix.m12;
        forward.z = localToWorldMatrix.m22;
        Vector3 upwards;
        upwards.x = localToWorldMatrix.m01;
        upwards.y = localToWorldMatrix.m11;
        upwards.z = localToWorldMatrix.m21;
        transform.rotation = Quaternion.LookRotation(forward, upwards);
    }
}
```
Result:
The idea is to place the two cameras in Unity and then project the point clouds captured by each of them. If the cameras were placed correctly, the two point clouds should merge into a reconstruction of the whole scene. But here is the result of placing the two cameras using the script above; clearly something is wrong.
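One sanity check that might help narrow this down (a pure-Python sketch, independent of Unity): every row of a valid rotation matrix should have unit length. Checking the values pasted above for camera 0:

```python
import math

# Rows of rotationMatrixCV for index == 0, copied from the script above.
rows = [
    [0.25563, 0.25744, -1.51961],
    [-0.192701, -0.819072, -0.540359],
    [-0.981055, 0.172012, 0.0891256],
]

norms = [math.sqrt(sum(c * c for c in row)) for row in rows]
for i, norm in enumerate(norms):
    print(f"row {i}: |r| = {norm:.4f}")
```

Rows 1 and 2 come out close to unit length, but row 0 does not, so the matrix as stored cannot be a pure rotation; it may be worth checking how the first lines of the calibration file are being parsed.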
Thank you again!
Hey, I'm just curious: you took the point clouds from the .ply files and then imported them into Unity, right?
Because the point clouds already have the transformations applied to them, you wouldn't need to change their transforms at all. For example, if you load all of the unmerged .ply frames into Meshlab, they should already appear as a "stitched" point cloud.
Hi Christopher! No, I'm not taking the .ply files; I have the Kinects connected to Unity and I'm getting the point clouds in real time from there.
Ah, I see!
Maybe you can take a look at this project/script here? As far as I know it works and imports the camera extrinsics from a modified version of LiveScan into Unity. You don't need the modified version of LiveScan, though; it only saves the camera extrinsics in a slightly different format, but the values are the same as in the .calib file.