
Comments (3)

MiguelMonteiro avatar MiguelMonteiro commented on July 17, 2024

It should be:

SPATIAL_DIMS=3, INPUT_CHANNELS=NUM_CLASSES and REFERENCE_CHANNELS=3

NUM_CLASSES should be 2 if using a softmax in the classification layer or 1 if using a sigmoid.
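For concreteness, here is a minimal sketch of those settings for a 3D binary-segmentation setup. The constant names come from the comment above; where exactly they are set depends on how you build the op, so treat this as illustrative:

```python
# Illustrative compile-time constants for a 3D binary-segmentation setup.
SPATIAL_DIMS = 3             # 3D volumes
NUM_CLASSES = 2              # softmax head (use 1 for a sigmoid head)
INPUT_CHANNELS = NUM_CLASSES # the op's input channels match the class count
REFERENCE_CHANNELS = 3       # e.g. a 3-channel reference image
```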

from permutohedral_lattice.

XYZ-916 avatar XYZ-916 commented on July 17, 2024

Got it. Thanks!


zxpeter avatar zxpeter commented on July 17, 2024

Hi,
I used SPATIAL_DIMS=2 (2D image), INPUT_CHANNELS=2 (softmax), and REFERENCE_CHANNELS=3 (3-channel image) in my scripts, but I still get errors like the ones below:


2019-05-15 00:34:17.253496: E tensorflow/stream_executor/cuda/cuda_dnn.cc:332] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
*** Received signal 11 ***
*** BEGIN MANGLED STACK TRACE ***
2019-05-15 00:34:17.584412: E tensorflow/stream_executor/cuda/cuda_driver.cc:903] failed to allocate 3.85G (4132822528 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-05-15 00:34:17.584468: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_3_bfc) ran out of memory trying to allocate 3.63GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(+0x692cbb)[0x7fc08f934cbb]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fc0cbb60390]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow14LaunchConv2DOpIN5Eigen9GpuDeviceEfEclEPNS_15OpKernelContextEbbRKNS_6TensorES8_iiiiRKNS_7PaddingEPS6_NS_12TensorFormatE+0x13e6)[0x7fc094c23656]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow8Conv2DOpIN5Eigen9GpuDeviceEfE7ComputeEPNS_15OpKernelContextE+0x3ec)[0x7fc094c28d5c]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZN10tensorflow13BaseGPUDevice13ComputeHelperEPNS_8OpKernelEPNS_15OpKernelContextE+0x37d)[0x7fc08f8623dd]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZN10tensorflow13BaseGPUDevice7ComputeEPNS_8OpKernelEPNS_15OpKernelContextE+0x8d)[0x7fc08f8628fd]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(+0x60d08d)[0x7fc08f8af08d]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(+0x60d89a)[0x7fc08f8af89a]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZN5Eigen26NonBlockingThreadPoolTemplIN10tensorflow6thread16EigenEnvironmentEE10WorkerLoopEi+0x21a)[0x7fc08f90de2a]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZNSt17_Function_handlerIFvvEZN10tensorflow6thread16EigenEnvironment12CreateThreadESt8functionIS0_EEUlvE_E9_M_invokeERKSt9_Any_data+0x32)[0x7fc08f90ced2]
/home/guan/anaconda3/bin/../lib/libstdc++.so.6(+0xafc5c)[0x7fc0b9395c5c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7fc0cbb566ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fc0cb88c41d]
*** END MANGLED STACK TRACE ***

*** Begin stack trace ***
	tensorflow::CurrentStackTrace()
	
	
	tensorflow::LaunchConv2DOp<Eigen::GpuDevice, float>::operator()(tensorflow::OpKernelContext*, bool, bool, tensorflow::Tensor const&, tensorflow::Tensor const&, int, int, int, int, tensorflow::Padding const&, tensorflow::Tensor*, tensorflow::TensorFormat)
	tensorflow::Conv2DOp<Eigen::GpuDevice, float>::Compute(tensorflow::OpKernelContext*)
	tensorflow::BaseGPUDevice::ComputeHelper(tensorflow::OpKernel*, tensorflow::OpKernelContext*)
	tensorflow::BaseGPUDevice::Compute(tensorflow::OpKernel*, tensorflow::OpKernelContext*)
	
	
	Eigen::NonBlockingThreadPoolTempl<tensorflow::thread::EigenEnvironment>::WorkerLoop(int)
	std::_Function_handler<void (), tensorflow::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)
	
	
	clone
*** End stack trace ***
Aborted (core dumped)

Any help would be appreciated.
Thanks for your time.
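Not an answer from this thread, but one observation: the log shows CUDA_ERROR_OUT_OF_MEMORY immediately before the cuDNN handle failure, so the crash may be GPU memory exhaustion rather than wrong channel settings. A common workaround with the TF 1.x API (sketch only; whether it resolves this particular crash is an assumption) is to stop TensorFlow from pre-allocating the whole GPU:

```python
# Sketch (TF 1.x session config): allocate GPU memory on demand so cuDNN
# has room to create its handle. A common workaround when
# CUDNN_STATUS_INTERNAL_ERROR is preceded by CUDA_ERROR_OUT_OF_MEMORY.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow allocation instead of grabbing all memory up front
# Optionally cap the fraction of GPU memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.8

# Pass the config when creating the session:
# sess = tf.Session(config=config)
```

If another process is already holding most of the GPU (check with `nvidia-smi`), freeing it may be enough on its own.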

