
semantic_human_texture_stitching's People

Contributors

aapatre, brejchajan, thmoa


semantic_human_texture_stitching's Issues

How to save texture coordinates

I have got sample.obj and sample_texture.jpg, but the project also needs texture coordinates. Has anyone solved this problem? How do I save the texture coordinates in the OBJ file? This is the last step I need, and I still don't know how to do it. Could you guys help me? Thank you so much!
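For context, OBJ stores texture coordinates as vt lines, and each face then indexes both a vertex and a UV coordinate as v/vt. A minimal sketch of writing such a file (the helper name and array layout are my assumptions, loosely following the repository's f/ft asset naming):

```python
import numpy as np

def write_obj_with_uvs(path, verts, faces, uvs, uv_faces):
    """Write an OBJ whose faces reference both vertex and UV indices.

    verts: (V, 3) vertex positions; uvs: (T, 2) texture coordinates.
    faces: (F, 3) vertex indices; uv_faces: (F, 3) UV indices (0-based).
    """
    with open(path, 'w') as fp:
        for v in verts:
            fp.write('v %f %f %f\n' % tuple(v))
        for t in uvs:
            fp.write('vt %f %f\n' % tuple(t))
        for face, uv_face in zip(faces, uv_faces):
            # OBJ indices are 1-based: each corner is v_idx/vt_idx
            fp.write('f %d/%d %d/%d %d/%d\n' % (
                face[0] + 1, uv_face[0] + 1,
                face[1] + 1, uv_face[1] + 1,
                face[2] + 1, uv_face[2] + 1))
```

A viewer that warns "this mesh does not have any texture coordinates" should accept an OBJ written this way, provided the UV layout matches the generated texture.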

(attached image: render)

iso==0.0.3 cannot import name 'Isomapper' from 'iso'

Step 1: make unwraps...
Traceback (most recent call last):
File "step1_make_unwraps.py", line 16, in <module>
from tex.texture import TextureData
File "/home/Downloads/semantic_human_texture_stitching/tex/texture.py", line 11, in <module>
from iso import Isomapper
ImportError: cannot import name 'Isomapper' from 'iso' (/home/anaconda3/envs/vibe-env/lib/python3.7/site-packages/iso/__init__.py)
Step 2: make priors...
step2_segm_vote_gmm.py:82: RuntimeWarning: divide by zero encountered in true_divide
unaries = np.ascontiguousarray((1 - voting / len(iso_files)) * 10)
step2_segm_vote_gmm.py:82: RuntimeWarning: invalid value encountered in true_divide
unaries = np.ascontiguousarray((1 - voting / len(iso_files)) * 10)
Traceback (most recent call last):
File "step2_segm_vote_gmm.py", line 146, in <module>
main(args.unwrap_dir, args.segm_out_file, args.gmm_out_file)
File "step2_segm_vote_gmm.py", line 86, in main
edge_idx = pkl.load(open('assets/basicModel_edge_idx_1000.pkl', 'rb'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 80: ordinal not in range(128)
Step 3: stitch texture...
Traceback (most recent call last):
File "step3_stitch_texture.py", line 14, in <module>
from tex.texture import Texture
File "/home/Downloads/semantic_human_texture_stitching/tex/texture.py", line 11, in <module>
from iso import Isomapper
ImportError: cannot import name 'Isomapper' from 'iso' (/home/anaconda3/envs/vibe-env/lib/python3.7/site-packages/iso/__init__.py)
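These tracebacks point at two separate problems. The PyPI package iso==0.0.3 is almost certainly not the iso module this repository expects, so pip-installing it cannot provide Isomapper. Separately, the UnicodeDecodeError in step 2 is the classic symptom of loading a Python 2 pickle under Python 3 (the codebase targets Python 2). For the latter, a hedged sketch (the helper name is mine):

```python
import pickle as pkl

def load_py2_pickle(path):
    """Load a pickle that may have been written by Python 2.

    'latin1' maps bytes 0x00-0xFF one-to-one, so it safely decodes the
    byte strings a Python 2 pickle contains.
    """
    with open(path, 'rb') as f:
        try:
            return pkl.load(f)
        except UnicodeDecodeError:
            f.seek(0)
            return pkl.load(f, encoding='latin1')
```

Under Python 3, loading assets/basicModel_edge_idx_1000.pkl through such a wrapper should avoid the ascii-codec error.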

What's the correspondence between 3D and 2D triangles?

Hi, Thiemo,
In your assets folder there are numpy files containing 'f' and 'ft' fields. I know they are the indices of the triangle faces in 3D and 2D, and I assumed they match one-to-one, which would give the correspondence of the vertices between 3D and 2D, but the result is wrong.
Could you explain the relationship between the 3D faces and the 2D texture faces? How do they differ from each other?
Thanks!
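For what it's worth, in OBJ-style data f and ft are parallel arrays: row i of both describes the same triangle, with f[i] indexing 3D vertices and ft[i] indexing UV coordinates. A vertex lying on a texture seam maps to several UV locations, so the vertex-level mapping is one-to-many, which may be why a naive one-to-one assumption fails. A sketch of recovering the per-corner correspondence (function name is mine):

```python
def vertex_to_uv_indices(f, ft):
    """Map each 3D vertex index to the set of UV indices it uses.

    f, ft: (F, 3) integer arrays; row i is the same triangle in 3D
    and in texture space, corner for corner.
    """
    mapping = {}
    for tri3d, tri_uv in zip(f, ft):
        for v_idx, vt_idx in zip(tri3d, tri_uv):
            mapping.setdefault(int(v_idx), set()).add(int(vt_idx))
    return mapping

# Vertices whose set has more than one UV index lie on a texture seam.
```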

GCO naming

Hi, thank you very much for the code!

I am experiencing an error in step2_segm_vote_gmm.py:
File "step2_segm_vote_gmm.py", line 107, in main
gc = gco.gco()

The problem is resolved by changing gco.gco() to gco.GCO().

Furthermore, on the next line, there is gc.createGeneralGraph(1000 ** 2, pairwise.shape[0], True), but should be gc.create_general_graph(1000 ** 2, pairwise.shape[0], True).

There are probably more places with the wrong naming of the GCO library. Shall I create a pull request with a fix?

I am on MacOS 10.15.7, homebrew python2.7, GCO version 3.0.2 (master, commit: 1203f5a7c0e458de7dc7191f7d5f89b9ce985e7a).
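For anyone hitting the same thing: newer builds of the gco Python wrapper appear to have renamed the camelCase API to snake_case, exactly as reported above. One way to stay compatible with either naming is a small attribute-resolution shim (the helper below is mine, not part of the repository):

```python
def compat_attr(obj, *names):
    """Return the first attribute of obj that exists, trying each name.

    Useful when a wrapper library renamed its API between releases
    (e.g. gco.gco -> gco.GCO, createGeneralGraph ->
    create_general_graph).
    """
    for name in names:
        if hasattr(obj, name):
            return getattr(obj, name)
    raise AttributeError('none of %r found on %r' % (names, obj))

# In step2_segm_vote_gmm.py one could then write (sketch):
#   gc = compat_attr(gco, 'GCO', 'gco')()
#   compat_attr(gc, 'create_general_graph', 'createGeneralGraph')(
#       1000 ** 2, pairwise.shape[0], True)
```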

question about basicModel_seams.npy and basicModel_edge_idx_1000.pkl?

Hi @thmoa, sorry for bothering you; I have a little question about basicModel_seams.npy and basicModel_edge_idx_1000.pkl. I guessed basicModel_seams.npy holds the indices of the seam vertices, but the maximum value is far beyond the basic model's vertex count. Can you explain what these two files contain? Thanks.

Texturing solution?

@alakhag, as you suggested, I followed the steps and my material file ("mymtl.mtl") now looks like this:

#Blender MTL File: 'None'
#Material Count: 1

newmtl MyMaterial
Ns 0
Ka 1.000000 1.000000 1.000000
Kd 0.400000 0.400000 0.400000
Ks 0.000000 0.000000 0.000000
d 1.000000
illum 2
map_Ka sample_texture.jpg

Then I downloaded the SMPL model (FBX), converted it to OBJ, and copied the vt and f entries into the Octopus-generated OBJ.

Before that, I removed the f entries from the Octopus OBJ.

The OBJ now looks like this when viewed (see attachment). Is this correct?

(attached image: output0001)

Originally posted by @alakhag in #2 (comment)

Mismatched texture coming from new data

Semantic texture stitching is not producing the correct texture map when run with new data we captured with our own camera. I have read the research paper carefully.
The data was given to Octopus, which predicted frame_data.pkl.
I passed this frame_data.pkl, the color images, and the semantic segmentations of the new data to the texture stitching method.

But I am getting wrong results. Mismatches occur badly at the face and at the belly of the body; in some parts the texture is tilted and rotated.

question about basicModel_seam

Hey, sorry for bothering you; just a little question about basicModel_seams.npy, because I would like to replace SMPL with some other body model.
I do not know the specific meaning of this npy file. I know that the seams are used to split the mesh surface for the UV unwrap, and the shape of your seam array is [1360, 4]. I understand that 1360 means there are 1360 edges making up the seams, but why is shape[1] equal to 4? What is the specific meaning of each dimension?
Thanks a lot for your response : )
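The column semantics are not documented, so any interpretation is a guess; but one quick way to test a hypothesis about the layout (e.g. that some columns index 3D vertices while others index UV coordinates) is to compare each column's value range against the vertex count (6890 for SMPL) and the UV count. A small inspection helper (mine):

```python
import numpy as np

def column_ranges(arr):
    """Return (min, max) per column of a 2D integer array.

    Comparing each range against known index spaces (vertex count,
    UV count) hints at what that column indexes; it does NOT prove
    the column's meaning.
    """
    return [(int(arr[:, i].min()), int(arr[:, i].max()))
            for i in range(arr.shape[1])]
```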

Consultation on Texture Map Mask

I ran the test file and got good results. But now I want to use my own data to generate textures, so how can I generate the texture map mask?

frame_data.pkl generation

I am using Octopus; in the last line of infer_single.py we get the pred, which is then written out as an OBJ, but we only write faces and vertices.

a) Is there something else, like vt and other information, that we need to write into the OBJ? Should we use the vt values from the basic model npy file for this step?

b) How do we create the frame_data.pkl so that it matches the OBJ that comes out of Octopus?

Neither of these parts is very clear; otherwise, your texture and Octopus code is deployed and running, but only for the default data.

frame_data.pkl

Hi @thmoa,

I was wondering how to generate the frame_data.pkl specifically this part:

"vertices": [list of per frame SMPL vertices in camera coordinates]

Are these the vertices from the .obj file? Also, is this dictionary entry a list within a list so each frame is basically an inner list of these .obj vertices?

Thanks
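Based on the fragment quoted above, a minimal sketch of assembling frame_data.pkl. Only 'vertices' and 'camera_f' are mentioned in these issues; 'camera_c' and the exact shapes below are my assumptions modeled on typical pinhole-camera setups, so check them against the step1 code:

```python
import numpy as np

def make_frame_data(per_frame_vertices, camera_f, camera_c):
    """Assemble the frame_data dict: 'vertices' is a list with one
    (6890, 3) SMPL vertex array per frame, in camera coordinates."""
    return {
        'vertices': [np.asarray(v) for v in per_frame_vertices],
        'camera_f': np.asarray(camera_f),  # focal length in pixels (fx, fy)
        'camera_c': np.asarray(camera_c),  # principal point (cx, cy) -- assumed key
    }

# Usage sketch:
#   fd = make_frame_data(vertex_arrays, [2000., 2000.], [540., 960.])
#   with open('frame_data.pkl', 'wb') as f:
#       pickle.dump(fd, f)
```

So yes, to the question above: 'vertices' is a list within a list, one inner array of OBJ-style SMPL vertices per frame.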

how to set camera_f while generating frame_data.pkl?

Hi @thmoa , many thanks for the code.

I am successfully able to use Octopus to generate an OBJ and apply a texture to it, but the output looks larger than actual. Is that due to a wrong focal length value?

I used a OnePlus 5 mobile camera with a rear focal length of 24mm to capture the frames. Could you please tell me how to use this 24mm value to set camera_f when generating frame_data.pkl?

Thanks in advance...
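The usual conversion from a lens focal length in millimetres to a focal length in pixels divides by the sensor width: f_px = f_mm * image_width_px / sensor_width_mm. Note that phone specs usually quote a 35mm-equivalent focal length; the OnePlus 5's physical sensor is much smaller, so the sensor width is the value you must look up and substitute:

```python
def focal_mm_to_pixels(f_mm, image_width_px, sensor_width_mm):
    """Convert a lens focal length (mm) to focal length in pixels.

    Assumes square pixels and that image_width_px spans the full
    sensor width (no digital crop).
    """
    return f_mm * image_width_px / sensor_width_mm

# Example: treating 24mm as a 35mm-equivalent value on a 1080px-wide
# frame (35mm film is 36mm wide): 24 * 1080 / 36 = 720 px.
```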

How do I apply a texture image to an .obj file?

I used Octopus to generate sample.obj and this project to generate sample_texture.jpg,
but the OBJ viewer warns "This mesh does not have any texture coordinates." What can I do? Thanks!

Using images of a 3d model as input data

Hello, I have been trying to reproduce your results, but I can't seem to get a detailed enough texture.
I was wondering why you use pictures of a 3D model as input data in your sample.
I was also wondering, for the captured-video setting, how careful should I be about lighting conditions and similar factors?
Thanks

Strange error in demo: divide by zero encountered in divide

Hi, thanks for your work. I ran the demo; however, I encountered a strange error. It seems a divide by zero occurs somewhere, but I can't find where it happens.
The following is the specific error message:

python step1_make_unwraps.py data/sample/frame_data.pkl data/sample/frames data/sample/segmentations data/sample/unwraps
0%| | 0/8 [00:00<?, ?it/s]/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/opendr-0.77-py2.7-linux-x86_64.egg/opendr/renderer.py:185: RuntimeWarning: divide by zero encountered in divide
result = np.asarray(deepcopy(gl.getDepth()), np.float64)
/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/opendr-0.77-py2.7-linux-x86_64.egg/opendr/renderer.py:185: RuntimeWarning: invalid value encountered in divide
result = np.asarray(deepcopy(gl.getDepth()), np.float64)
Exception KeyError: KeyError(<weakref at 0x7f5abfa06158; to 'tqdm' at 0x7f5abfa27890>,) in <bound method tqdm.del of 0%| | 0/8 [00:00<?, ?it/s]> ignored
Traceback (most recent call last):
File "step1_make_unwraps.py", line 89, in <module>
main(args.data_file, args.frame_dir, args.segm_dir, args.out)
File "step1_make_unwraps.py", line 57, in main
vis, iso, iso_segm = texture.get_data(frame, camera, mask, segm)
File "/ssd2/vis/v_wangzhi04/semantic_human_texture_stitching/tex/texture.py", line 29, in get_data
f_vis = self.visibility.face_visibility(camera, silh)
File "/ssd2/vis/v_wangzhi04/semantic_human_texture_stitching/util/visibility.py", line 48, in face_visibility
v_vis = self.vertex_visibility(camera, mask)
File "/ssd2/vis/v_wangzhi04/semantic_human_texture_stitching/util/visibility.py", line 32, in vertex_visibility
depth = self.rn_d.r
File "/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/chumpy-0.68-py2.7.egg/chumpy/ch.py", line 596, in r
self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
File "/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/opendr-0.77-py2.7-linux-x86_64.egg/opendr/renderer.py", line 100, in compute_r
return self.depth_image.reshape((self.frustum['height'], self.frustum['width']))
File "/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/chumpy-0.68-py2.7.egg/chumpy/ch.py", line 1213, in with_caching
sdf['value'] = func(self, *args, **kwargs)
File "/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/opendr-0.77-py2.7-linux-x86_64.egg/opendr/renderer.py", line 185, in depth_image
result = np.asarray(deepcopy(gl.getDepth()), np.float64)
File "contexts/ctx_base.pyx", line 24, in contexts.ctx_mesa.mc.with_make_current
File "contexts/ctx_base.pyx", line 257, in contexts.ctx_mesa.OsContextBase.getDepth
File "contexts/ctx_base.pyx", line 24, in contexts.ctx_mesa.mc.with_make_current
File "contexts/ctx_base.pyx", line 313, in contexts.ctx_mesa.OsContextBase.getDepthCloud
File "/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 528, in inv
ainv = _umath_linalg.inv(a, signature=signature, extobj=extobj)
File "/ssd2/vis/v_wangzhi04/anaconda2/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 89, in _raise_linalgerror_singular
raise LinAlgError("Singular matrix")
numpy.linalg.linalg.LinAlgError: Singular matrix

AttributeError: 'GaussianMixture' object has no attribute 'means_'

@thmoa
Hi, when I run "bash run_sample.sh", sometimes it runs to the end without any error, but sometimes it returns the following error:

Traceback (most recent call last):
  File "step3_stitch_texture.py", line 83, in <module>
    main(args.unwrap_dir, args.segm_template, args.gmm, args.out_file, args.iter)
  File "step3_stitch_texture.py", line 48, in main
    texture_agg, labels = texture.add_iso(isos[rl], visibilities[rl], rl, inpaint=i == (num_iter-1))
  File "/home/wuyibin/MultiGarmentNetwork-master_python3/semantic_human_texture_stitching-master/tex/texture.py", line 112, in add_iso
    diff = data.reshape(-1, 1, 3) - self.gmms[color_id].means_
AttributeError: 'GaussianMixture' object has no attribute 'means_'

Any advice about resolving this error will be appreciated.
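A GaussianMixture only gains a means_ attribute after a successful fit(); the intermittent failure suggests that for some runs one segmentation class ends up with no (or too few) pixels, so its GMM is never fitted and step 3 crashes when it touches that class. A defensive sketch (helper name is mine; whether a late fit is the right recovery for this pipeline is an assumption):

```python
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.mixture import GaussianMixture

def safe_means(gmm, data):
    """Return gmm.means_, fitting lazily if the GMM was never fitted.

    data: (N, 3) color samples for this class; if there are not even
    n_components samples, there is nothing sensible to fit.
    """
    try:
        return gmm.means_
    except AttributeError:
        if len(data) >= gmm.n_components:
            gmm.fit(data)  # late fit, assuming the class data is at hand
            return gmm.means_
        raise NotFittedError('no data to fit a GMM for this class')
```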

Getting an error while running run_sample.sh

I got your Octopus project running correctly earlier and was trying to get the texture mapping done as well. I have all the dependencies installed correctly.
However, when I ran run_sample.sh, I got this error. Can you please tell me what could be the reason?
Step 1: make unwraps...
100%|██████████| 8/8 [00:22<00:00, 2.86s/it]
Step 2: make priors...
extract from 0000_unwrap.jpg...
extract from 0001_unwrap.jpg...
extract from 0002_unwrap.jpg...
extract from 0003_unwrap.jpg...
extract from 0004_unwrap.jpg...
extract from 0005_unwrap.jpg...
extract from 0006_unwrap.jpg...
extract from 0007_unwrap.jpg...
GMM fit Shoes...
GMM fit Torso-skin...
GMM fit Arms...
GMM fit Face...
GMM fit Hair...
GMM fit UpperClothes...
GMM fit Pants...
Traceback (most recent call last):
File "step2_segm_vote_gmm.py", line 146, in <module>
main(args.unwrap_dir, args.segm_out_file, args.gmm_out_file)
File "step2_segm_vote_gmm.py", line 106, in main
gc = gco.gco()
AttributeError: 'module' object has no attribute 'gco'
Step 3: stitch texture...
Traceback (most recent call last):
File "step3_stitch_texture.py", line 83, in <module>
main(args.unwrap_dir, args.segm_template, args.gmm, args.out_file, args.iter)
File "step3_stitch_texture.py", line 24, in main
segm_template = read_segmentation(segm_template_file)
File "/home/drive/semantic_human_texture_stitching/util/labels.py", line 77, in read_segmentation
segm = cv2.imread(file)[:, :, ::-1]
TypeError: 'NoneType' object has no attribute '__getitem__'

Data Preparation

Hi @thmoa ,
I have successfully created the frames and their corresponding segmentations using PGN segmentation. For the second step, I executed Octopus (run_demo.sh), but it generates only the 3D model, sample.obj. I then figured out how to save the vertices, i.e. pred['vertices'][0] in infer_single.py, but that produces a single vertices.pkl file for all the frames.
How can I save the vertices for every frame?
Please help, as this is needed to create frame_data.pkl.
Thanks.

How to apply the texture image to the 3D model

Hello guys! I got the 3D human model by running the Octopus project. Then I ran the semantic_human_texture_stitching project, which generated the texture image and the segmentation image; they are shown in order below:

(attached images: model, texture, seg)

But when I applied the texture image (second image) to the 3D model (first image) in Blender and Unity, the model is always black, without any clothing texture. I hope someone who knows what happened can help me; thank you in advance!

(attached image: render)
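For reference, one common cause of an all-black model is the material setup: the .mtl shown in the "Texturing solution?" issue above binds the texture only as map_Ka (ambient) and leaves the diffuse color Kd at dark grey, while most viewers shade from the diffuse channel. A variant worth trying (standard MTL syntax; whether it fixes this particular scene is untested, and missing vt texture coordinates in the OBJ would also produce an untextured model):

```
newmtl MyMaterial
Ns 0
Ka 1.000000 1.000000 1.000000
Kd 1.000000 1.000000 1.000000
Ks 0.000000 0.000000 0.000000
d 1.000000
illum 2
map_Kd sample_texture.jpg
```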

ImportError: No module named gco

I'm using Python 2 in conda and getting the following error after running bash run_sample.sh:

Step 1: make unwraps...
Traceback (most recent call last):
  File "step1_make_unwraps.py", line 16, in <module>
    from tex.texture import TextureData
  File "/media/ingnious/Acer/Users/nachi/repos/semantic_human_texture_stitching/tex/texture.py", line 10, in <module>
    from stitch.texels_fusion import Stitcher
  File "/media/ingnious/Acer/Users/nachi/repos/semantic_human_texture_stitching/stitch/texels_fusion.py", line 4, in <module>
    import gco
ImportError: No module named gco
Step 2: make priors...
Traceback (most recent call last):
  File "step2_segm_vote_gmm.py", line 6, in <module>
    import gco
ImportError: No module named gco
Step 3: stitch texture...
Traceback (most recent call last):
  File "step3_stitch_texture.py", line 14, in <module>
    from tex.texture import Texture
  File "/media/ingnious/Acer/Users/nachi/repos/semantic_human_texture_stitching/tex/texture.py", line 10, in <module>
    from stitch.texels_fusion import Stitcher
  File "/media/ingnious/Acer/Users/nachi/repos/semantic_human_texture_stitching/stitch/texels_fusion.py", line 4, in <module>
    import gco
ImportError: No module named gco

An implementation question about your paper

Hi
when I try to implement your "detailed video avatar" paper, a little question confuses me. In the subdivided SMPL body model, you construct a finer model Mf(β, θ, D, s) consisting of 110,210 vertices and 220,416 faces. I noticed that the shape blend shapes β, the pose blend shapes θ, the blend skinning function W, and even the joint regressor J no longer apply to this finer mesh. I wonder how you update the original SMPL parameters β, θ, W, and J. Looking forward to your response.
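I can't say how the paper does it, but a common way to transfer SMPL's per-vertex quantities (skinning weights, blend-shape offsets, regressor rows) to a subdivided mesh is barycentric interpolation: each new vertex lies on a known triangle of the coarse mesh, so its attributes are a weighted mix of the three corner attributes. A sketch (function name and calling convention are mine):

```python
import numpy as np

def upsample_vertex_attribute(attr, faces, bary, face_idx):
    """Interpolate a per-vertex attribute onto new surface points.

    attr: (V, ...) per-vertex data (e.g. skinning weights W);
    faces: (F, 3) triangle indices; for each new vertex i,
    face_idx[i] names its source triangle and bary[i] holds its (3,)
    barycentric coordinates within that triangle.
    """
    tri = attr[faces[face_idx]]  # (N, 3, ...) corner attributes
    return np.einsum('nc,nc...->n...', bary, tri)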

instructions for people-snapshot dataset

Hi,

Thank you for making the code public. I'm trying to run the code for people-snapshot dataset.
In step1, I get an error that SMPL vertices are not visible.

Do I need to run octopus on the people-snapshot subject data to get the per-frame vertices?

Could you also specify what configuration you used to run octopus? I have tried tensorflow-gpu 1.13/1.15/2.9 (and a few more 2.x versions) and something always seems to break. If you can provide the versions of the dependencies, it would be super helpful. Thanks.

Change Texture Map mask

I want to change the mask provided with this repository.
This repository provides this mask:
(attached image: old_mask)

However, I want to use this mask, extracted from the People's Snapshot Dataset:
(attached image: tex_mask_1000)
How can I do that?

a problem about generating texture

First of all, thanks for your code. I am trying to run it on my own data following your steps, but why does the result look like this?

I think the input image is not aligned with the vertices produced by Octopus, but how can I solve this problem? Thanks again~

(attached image: sample_texture)

videoavatar

Can this algorithm be used in videoavatar code?

edge seam artifacts in textured models

I ran the Octopus pipeline, which uses this repo for generating textures. The textured models have artifacts at the edge seams; was this done deliberately to separate out the different parts?

Another problem about generating texture map

I have optimized Octopus enough times and get a correct mesh, so I obtain plausible partial texture maps. But when I merge all of these texture maps, I get the result below. How should I fix this problem? Thanks~

(attached image: sample_texture)

Isomapping

Thank you for this work.
I am just a little curious: can you explain, or refer me to a paper that explains, the isomapping, i.e. how you are able to create the texture vertex coordinates in the assets (basicModel_vt.npy)? In other words, how did you do the conformal mapping to get the texture coordinates? I am mostly interested in the human face region of your work. It seems you didn't use the cylindrical mapping common in face modeling, which makes sense since this is a full human model.

I would be very grateful if you could give me some information on, or the mathematical mapping behind, how you generate the texture mask for the model.
