
holy-edge's People

Contributors

dependabot[bot], sandhawalia


holy-edge's Issues

cannot clone pre-trained model file

Hi, it seems that hed-model-5000.meta cannot be cloned because the repository's Git LFS data quota has been exceeded. Could you provide a new link for this file? Thanks!
The current content is just a placeholder:

    version https://git-lfs.github.com/spec/v1
    oid sha256:71ef9f6fb10c25654e3f16708da7efc55bdbcc02cae75b84134a7ce051f728f9
    size 60853279

This backport is for Python 2.7 only

While installing the requirements, I'm facing the following problem when installing functools32:

    Complete output from command python setup.py egg_info:
    This backport is for Python 2.7 only
I tried downloading a different version than the one pinned in requirements.txt, but then I ran into problems with a lot of other packages. Could you please share your environment details?
For reference, I'm using Windows 10 64-bit, conda, Python 3.7.

Dimension Mismatch

Console Error Log

2018-05-06 17:38:44.599331: W tensorflow/core/framework/op_kernel.cc:1152] Invalid argument: ConcatOp : Dimensions of inputs should match: shape[0] = [1,400,600,1] vs. shape[4] = [1,400,608,1]
[[Node: concat = ConcatV2[N=5, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](side_1/conv2d_transpose, side_2/conv2d_transpose, side_3/conv2d_transpose, side_4/conv2d_transpose, side_5/conv2d_transpose, concat/axis)]]
Traceback (most recent call last):
File "run-hed.py", line 64, in
main(args)
File "run-hed.py", line 44, in main
tester.run(session)
File "/home/mnadeem/research/holy-edge/hed/test.py", line 68, in run
edgemap = session.run(self.model.predictions, feed_dict={self.model.images: [im]})
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,400,600,1] vs. shape[4] = [1,400,608,1]
[[Node: concat = ConcatV2[N=5, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](side_1/conv2d_transpose, side_2/conv2d_transpose, side_3/conv2d_transpose, side_4/conv2d_transpose, side_5/conv2d_transpose, concat/axis)]]

Caused by op u'concat', defined at:
File "run-hed.py", line 64, in
main(args)
File "run-hed.py", line 43, in main
tester.setup(session)
File "/home/mnadeem/research/holy-edge/hed/test.py", line 37, in setup
self.model = Vgg16(self.cfgs, run='testing')
File "/home/mnadeem/research/holy-edge/hed/models/vgg16.py", line 30, in init
self.define_model()
File "/home/mnadeem/research/holy-edge/hed/models/vgg16.py", line 81, in define_model
self.fuse = self.conv_layer(tf.concat(self.side_outputs, axis=3),
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1034, in concat
name=name)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 519, in _concat_v2
name=name)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/mnadeem/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1228, in init
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,400,600,1] vs. shape[4] = [1,400,608,1]
[[Node: concat = ConcatV2[N=5, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](side_1/conv2d_transpose, side_2/conv2d_transpose, side_3/conv2d_transpose, side_4/conv2d_transpose, side_5/conv2d_transpose, concat/axis)]]

Config:

    training:
        dir: HED-BSDS
        list: HED-BSDS/train_pair.lst
        image_width: 480
        image_height: 480
        n_channels: 3

    testing:
        dir: mrl_database
        list: mrl_database/files.lst
        image_width: 600
        image_height: 400
        n_channels: 3

I just want to use the pre-trained weights.
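The shapes in the error hint at the cause: the five side outputs are upsampled from successively pooled feature maps, and with an input width of 600 the deepest map rounds up through the poolings and comes back at 608 after the 16x deconvolution, so the concat fails. A common workaround (an assumption on my part, not something the repo does for you) is to choose test dimensions that are multiples of 16:

```python
def round_up(x, base=16):
    """Round x up to the nearest multiple of base."""
    return ((x + base - 1) // base) * base

# 600 is not divisible by 16, so the 16x-upsampled side output
# comes back at 608 and no longer matches the other side outputs.
print(round_up(600))  # -> 608
print(round_up(400))  # -> 400 (already a multiple of 16, unchanged)
```

With image_width: 608 and image_height: 400 (or a resize/pad to those dimensions before feeding the network) all five side outputs should line up.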

Memory

Hello, I am a student. I have recently been reading the HED paper, and I found your code very interesting, but I get a MemoryError when I try to train the model on my own data. My computer has 32 GB of memory. Could you give me some suggestions? Thank you very much.

How to use the output model with Tensorflow lite for Mobile

Hi,

I followed the steps in the description and successfully trained a new model on my images. I would like to convert the output model to a TensorFlow Lite model for mobile usage. I followed the steps here and was able to freeze the HED model to a GraphDef (.pb) with output_node_names=predictions, but I don't know how to convert the GraphDef model to a TensorFlow Lite model using toco. The reason is that I don't know where to get some of the parameters: input_arrays, output_arrays, input_shapes, output_node_names. I also ran the tensorboard command and looked at the graphs, but I don't see information about those parameters there.

Could you please explain how to export the model and use it with TensorFlow Lite?

Thanks,
Duc

pretrained model invalid

Hi, when I run git lfs fetch && git lfs pull, it outputs:

Git LFS: (0 of 2 files) 0 B / 585.83 MB
batch response: This repository is over its data quota. Purchase more data packs to restore access.

Could you upload the model elsewhere?

Wrong loss function

In the original paper:
beta = |Y-| / |Y|
"|Y-| and |Y+| denote the edge and non-edge ground truth label sets, respectively"

This definition is really counter-intuitive to me.


In "sigmoid_cross_entropy_balanced":

    y = tf.cast(label, tf.float32)

    count_neg = tf.reduce_sum(1. - y)
    count_pos = tf.reduce_sum(y)

    # Equation [2]
    beta = count_neg / (count_neg + count_pos)

    # Equation [2] divide by 1 - beta
    pos_weight = beta / (1 - beta)

It seems that the sigmoid_cross_entropy_balanced function in losses.py is wrong.
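For what it's worth, setting the paper's wording aside, the code is internally consistent with the standard class-balanced cross-entropy: pos_weight = beta / (1 - beta) reduces to count_neg / count_pos, so the rarer edge pixels are up-weighted by exactly the neg/pos ratio. A toy numpy check with made-up data:

```python
import numpy as np

# hypothetical ground truth: 10 pixels, 3 edge (positive), 7 non-edge
y = np.array([1., 1., 1., 0., 0., 0., 0., 0., 0., 0.])

count_neg = np.sum(1.0 - y)                 # 7.0
count_pos = np.sum(y)                       # 3.0
beta = count_neg / (count_neg + count_pos)  # 0.7, fraction of non-edge pixels

# Equation [2]: positives are up-weighted by the neg/pos ratio
pos_weight = beta / (1.0 - beta)            # 7/3, identical to count_neg / count_pos
```

So the formula itself matches the usual balancing scheme; the disagreement is with how the paper labels |Y-| and |Y+|, not with the arithmetic.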

A question about weight decay

Hello! It seems that the weight decay (0.0002) from the HED paper only appears in holy-edge/hed/configs/hed.yaml and is never actually used during training. Is that correct? Looking forward to your reply.

Removing resizing image step

From the HED paper I understood that we don't need to resize images, since the network has no fully connected layers. So for my own dataset I wanted to modify your code to remove this step, so that it accepts images of any size and produces an edge map of the same size as the input.

But just removing these lines

im = im.resize((self.cfgs['training']['image_width'], self.cfgs['training']['image_height']))
em = em.resize((self.cfgs['training']['image_width'], self.cfgs['training']['image_height']))
is giving an error.

Is it possible to do this with your code?
Any hint or pointer would be appreciated. Thank you.
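One way fully convolutional networks are often run on arbitrary sizes is to zero-pad each image up to the next multiple of 16 (so the five side outputs align) and crop the predicted edge map back afterwards. This is a hypothetical sketch, not code from the repo, and `pad_to_multiple` is a name I made up:

```python
import numpy as np

def pad_to_multiple(im, base=16):
    """Zero-pad height and width up to the next multiple of `base`.

    Returns the padded image plus the original (h, w), so the
    predicted edge map can be cropped back to the input size.
    """
    h, w = im.shape[:2]
    H = ((h + base - 1) // base) * base
    W = ((w + base - 1) // base) * base
    padded = np.pad(im, ((0, H - h), (0, W - w), (0, 0)), mode='constant')
    return padded, (h, w)

im = np.zeros((400, 600, 3), dtype=np.float32)   # toy input
padded, (h, w) = pad_to_multiple(im)
print(padded.shape)  # (400, 608, 3)
# after inference: edge_map = edge_map[:h, :w]
```

You would still need to remove the fixed image_width/image_height assumptions elsewhere in the pipeline, and with batching all images in a batch must share one shape.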

Training does not converge

[08 May 2018 21h43m00s][INFO] [7428/100000] TRAINING loss : 0.1480998396873474
[08 May 2018 21h43m02s][INFO] [7429/100000] TRAINING loss : 0.16046211123466492
[08 May 2018 21h43m03s][INFO] [7430/100000] TRAINING loss : 0.15747885406017303
[08 May 2018 21h43m03s][INFO] [7430/100000] VALIDATION error : 0.2499309927225113
[08 May 2018 21h43m04s][INFO] [7431/100000] TRAINING loss : 0.14023230969905853
[08 May 2018 21h43m06s][INFO] [7432/100000] TRAINING loss : 0.15643279254436493
[08 May 2018 21h43m07s][INFO] [7433/100000] TRAINING loss : 0.1568005532026291
[08 May 2018 21h43m09s][INFO] [7434/100000] TRAINING loss : 0.13042421638965607
[08 May 2018 21h43m10s][INFO] [7435/100000] TRAINING loss : 0.13672024011611938
[08 May 2018 21h43m11s][INFO] [7436/100000] TRAINING loss : 0.16531457006931305
[08 May 2018 21h43m12s][INFO] [7437/100000] TRAINING loss : 0.1498943716287613
[08 May 2018 21h43m14s][INFO] [7438/100000] TRAINING loss : 0.1395827680826187
[08 May 2018 21h43m15s][INFO] [7439/100000] TRAINING loss : 0.16227483749389648
[08 May 2018 21h43m16s][INFO] [7440/100000] TRAINING loss : 0.14770230650901794
[08 May 2018 21h43m17s][INFO] [7440/100000] VALIDATION error : 0.24861328303813934
[08 May 2018 21h43m19s][INFO] [7441/100000] TRAINING loss : 0.13362529873847961
[08 May 2018 21h43m20s][INFO] [7442/100000] TRAINING loss : 0.1269095093011856
[08 May 2018 21h43m21s][INFO] [7443/100000] TRAINING loss : 0.15405525267124176
[08 May 2018 21h43m23s][INFO] [7444/100000] TRAINING loss : 0.1538567692041397
[08 May 2018 21h43m25s][INFO] [7445/100000] TRAINING loss : 0.16362397372722626
[08 May 2018 21h43m26s][INFO] [7446/100000] TRAINING loss : 0.15053629875183105
[08 May 2018 21h43m28s][INFO] [7447/100000] TRAINING loss : 0.1497960239648819
[08 May 2018 21h43m29s][INFO] [7448/100000] TRAINING loss : 0.14233416318893433
[08 May 2018 21h43m30s][INFO] [7449/100000] TRAINING loss : 0.13796669244766235
[08 May 2018 21h43m32s][INFO] [7450/100000] TRAINING loss : 0.13414344191551208
[08 May 2018 21h43m32s][INFO] [7450/100000] VALIDATION error : 0.2629392445087433
[08 May 2018 21h43m34s][INFO] [7451/100000] TRAINING loss : 0.16665637493133545
[08 May 2018 21h43m35s][INFO] [7452/100000] TRAINING loss : 0.14615492522716522
[08 May 2018 21h43m37s][INFO] [7453/100000] TRAINING loss : 0.09095730632543564
[08 May 2018 21h43m38s][INFO] [7454/100000] TRAINING loss : 0.15941087901592255
[08 May 2018 21h43m40s][INFO] [7455/100000] TRAINING loss : 0.15737438201904297
[08 May 2018 21h43m42s][INFO] [7456/100000] TRAINING loss : 0.15094870328903198
[08 May 2018 21h43m43s][INFO] [7457/100000] TRAINING loss : 0.1470448523759842
[08 May 2018 21h43m44s][INFO] [7458/100000] TRAINING loss : 0.15349626541137695
[08 May 2018 21h43m46s][INFO] [7459/100000] TRAINING loss : 0.12288802117109299
[08 May 2018 21h43m47s][INFO] [7460/100000] TRAINING loss : 0.1600235551595688
[08 May 2018 21h43m48s][INFO] [7460/100000] VALIDATION error : 0.2703090310096741
[08 May 2018 21h43m49s][INFO] [7461/100000] TRAINING loss : 0.13551127910614014
[08 May 2018 21h43m51s][INFO] [7462/100000] TRAINING loss : 0.15077351033687592
[08 May 2018 21h43m52s][INFO] [7463/100000] TRAINING loss : 0.13134542107582092
[08 May 2018 21h43m53s][INFO] [7464/100000] TRAINING loss : 0.14744725823402405
[08 May 2018 21h43m55s][INFO] [7465/100000] TRAINING loss : 0.14557607471942902
[08 May 2018 21h43m56s][INFO] [7466/100000] TRAINING loss : 0.1406044214963913
[08 May 2018 21h43m58s][INFO] [7467/100000] TRAINING loss : 0.1485893577337265
[08 May 2018 21h43m59s][INFO] [7468/100000] TRAINING loss : 0.1595202535390854
[08 May 2018 21h44m00s][INFO] [7469/100000] TRAINING loss : 0.16567926108837128
[08 May 2018 21h44m01s][INFO] [7470/100000] TRAINING loss : 0.15977224707603455
[08 May 2018 21h44m02s][INFO] [7470/100000] VALIDATION error : 0.19350607693195343

Any idea?

Reupload the pretrained model?

As others have pointed out before, the Git LFS data quota is exhausted for the pretrained model files.

If someone could re-upload them somewhere, I would gladly open a mirror for downloading them as well.

What does im -= self.cfgs['mean_pixel_value'] mean?

In data_parser.py I found im -= self.cfgs['mean_pixel_value'], where
mean_pixel_value: [103.939, 116.779, 123.68]
I don't understand what this operation does. Is it for normalization?
Can I use tf.image.per_image_standardization() instead?
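This looks like standard VGG-style preprocessing: those values are per-channel means of the pretraining data (commonly cited as the Caffe VGG means, in BGR order), and subtracting them zero-centres the input the way the pretrained vgg16.npy weights expect. tf.image.per_image_standardization is not equivalent: it normalizes each image by its own mean and standard deviation, so the pretrained weights would see a different input distribution. A quick numpy illustration of the subtraction:

```python
import numpy as np

# per-channel dataset mean (assumed BGR order, VGG-style preprocessing)
MEAN_PIXEL = np.array([103.939, 116.779, 123.68])

im = np.full((2, 2, 3), 128.0)  # toy 2x2 image, all channels at 128
centred = im - MEAN_PIXEL       # broadcasts over height and width

# each channel is now offset by (128 - channel mean),
# i.e. roughly [24.061, 11.221, 4.32]
```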

About Loss Function

Hi, everyone. Reading data_parser.py, I found that there is an option for the input labels, "target regression". If we choose this option, the loaded ground truth is a matrix of real numbers from 0 to 1 rather than a binary matrix. I then checked losses.py and found these two lines of code:
"count_neg = tf.reduce_sum(1. - y)"
"count_pos = tf.reduce_sum(y)"

These two lines seem to work well for a binary ground truth, but do they also work for a ground truth consisting of real numbers between 0 and 1? I am looking forward to your answers.
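Mechanically the two sums still work with soft labels: they simply become the total "edge mass" and "non-edge mass" of the image, so beta remains a well-defined fraction. Whether that weighting is what the paper intends for regression targets is a separate question. A toy check with made-up values:

```python
import numpy as np

# hypothetical soft ground truth in [0, 1]
y = np.array([0.9, 0.6, 0.2, 0.0, 0.1])

count_neg = np.sum(1.0 - y)  # 3.2, total non-edge mass
count_pos = np.sum(y)        # 1.8, total edge mass
beta = count_neg / (count_neg + count_pos)  # 0.64, still a valid fraction
```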

train not working

Hi,
I was experimenting with different image_width and image_height values for training, and even after setting them back to 480 and 480, training no longer works. I am getting the following error.

[10 Jan 2018 10h30m46s][INFO] Model weights loaded from vgg16.npy
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-1+SIDE-1
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-2+SIDE-2
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-3+SIDE-3
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-4+SIDE-4
[10 Jan 2018 10h30m46s][INFO] Added CONV-BLOCK-5+SIDE-5
[10 Jan 2018 10h30m46s][INFO] Added FUSE layer
[10 Jan 2018 10h30m46s][INFO] Build model finished: 0.1343s
[10 Jan 2018 10h30m46s][INFO] Done initializing VGG-16 model
[10 Jan 2018 10h30m47s][INFO] Training data set-up from /home/pchaudha/hed/hed-data/HED-BSDS/train_pair.lst
[10 Jan 2018 10h30m47s][INFO] Training samples 23040
[10 Jan 2018 10h30m47s][INFO] Validation samples 5760
[10 Jan 2018 10h30m47s][WARNING] Deep supervision application set to True
Traceback (most recent call last):
File "run-hed.py", line 64, in
main(args)
File "run-hed.py", line 38, in main
trainer.run(session)
File "/home/pchaudha/hed/hed/train.py", line 69, in run
run_metadata=run_metadata)
File "/home/pchaudha/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/home/pchaudha/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1093, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/home/pchaudha/.local/lib/python2.7/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.

Does anyone know why is this happening?
Thank you!

Does anyone have problems with "Permission denied" when running the code on a GPU?

Free memory: 10.76GiB
2018-10-21 22:18:39.485611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-10-21 22:18:39.485616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2018-10-21 22:18:39.485627: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0)
[21 Oct 2018 22h18m40s][INFO] Model weights loaded from vgg16.npy
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-1+SIDE-1
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-2+SIDE-2
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-3+SIDE-3
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-4+SIDE-4
[21 Oct 2018 22h18m40s][INFO] Added CONV-BLOCK-5+SIDE-5
[21 Oct 2018 22h18m40s][INFO] Added FUSE layer
[21 Oct 2018 22h18m40s][INFO] Build model finished: 0.1324s
[21 Oct 2018 22h18m40s][INFO] Done initializing VGG-16 model
[21 Oct 2018 22h18m40s][ERROR] Error setting up VGG-16 model, [Errno 13] Permission denied: '/home/code'
