
fooling's People

Contributors

anguyen8, yosinski


fooling's Issues

boost_mpi

sferes needs boost_mpi. When I installed boost 1.57.0 there were MPI include files, but no boost_mpi library. I searched the internet and found:
"
add the following line to your user-config.jam file
using mpi ;
then
bjam --with-mpi
"
But I can't find a user-config.jam file in the Boost path. I have installed MVAPICH2 2.0.
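
For reference, here is a rough sketch of how that suggestion is usually applied. The paths are placeholders and the commands assume MVAPICH2's compiler wrappers are on the PATH; Boost.Build also reads user-config.jam from your home directory, so you can create one there if none ships with the sources:

# create a user-config.jam in your home directory and declare the MPI toolset
echo "using mpi ;" >> ~/user-config.jam

# then build only the MPI library from the Boost source tree
cd /path/to/boost_1_57_0        # placeholder path
./bootstrap.sh
./b2 --with-mpi                 # "bjam --with-mpi" is the older equivalent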

ImageDataLayer: Check failed: num_images <= batch_size (10 vs. 1)

I was attempting to reproduce the results of your excellent paper on my own dataset and set up the environment as described in the guide. However, when running the given experiment with ./build/default/exp/images/images 1, Caffe throws the error:
image_data_layer.cpp:328] Check failed: num_images <= batch_size (10 vs. 1) The number of added images 10 must be no greater than the batch size 1

This happens both with the provided prototxt file and with my own model. Is the value 10 hardcoded somewhere? I found that the vector<cv::Mat>& images passed to AddImagesAndLabels in image_data_layer.cpp has size 10, but I cannot determine where the function is being called from.

layers {
  name: "data"
  type: IMAGE_DATA
  top: "data"
  top: "label"
  image_data_param {
    source: "/home/ambar/Dropbox/code/prev-paper-1/fooling/sferes/exp/images/gtsrb/labels.txt"
    mean_file: "/home/ambar/Dropbox/code/prev-paper-1/fooling/sferes/exp/images/gtsrb/mean.binaryproto"
    batch_size: 1
    new_height: 32
    new_width: 32
    images_in_color: false
  }
}
layers {
  name: "conv1"
[.......................] other layers
}

labels.txt:

/home/ambar/Dropbox/code/prev-paper-1/fooling/sferes/exp/images/gtsrb/20.png 1
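
If the 10 is the number of images the experiment pushes into the data layer per evaluation (a guess based only on the error message; I have not verified where it is set), one workaround to try is raising batch_size in the image_data_param so it is at least that large:

image_data_param {
  # source, mean_file, new_height, new_width, images_in_color as before;
  # only the batch size changes so it is >= the number of images added per call
  batch_size: 10
}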

Docker image?

Do you think it would be possible to publish a Docker image for this experiment?

A request

Hello!
I was totally astonished by your program and at once wanted to play with deep neural networks.
But I'm just an ordinary user, and the installation process seems incomprehensible to me. Could you please make something like an installer with a GUI?

Error with MPI on Ubuntu

Hi, I am trying to generate some MNIST examples locally, but it seems I can't get MPI to work. Could you give me a clue about how to fix it?

this user in file runtime/orte_init.c at line 128
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_set_name failed
  --> Returned value A system-required executable either could not be found or was not executable by this user (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
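
It may help to first confirm that the MPI runtime itself can launch processes outside of sferes; orte_ess_set_name failures can come from a broken or mixed MPI installation (for example, headers from one MPI implementation and the runtime from another). A quick, generic sanity check:

# check which MPI launcher and compiler wrapper are actually on the PATH
which mpirun mpicxx
mpirun --version

# launch a trivial two-process job; if this also fails, the problem is the MPI
# installation itself rather than the fooling/sferes code
mpirun -np 2 hostname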

Unable to generate MNIST example

After I changed dl_images.hpp and dl_map_elites_images_mnist.cpp and generated the MNIST images, I compiled the program, but I get a segfault. When I run Caffe independently it works fine. With GLOG enabled, the output looks like this:

I0525 23:13:19.182551 27365 net.cpp:207] Collecting Learning Rate and Weight Decay.
I0525 23:13:19.182559 27365 net.cpp:130] Network initialization done.
I0525 23:13:19.182565 27365 net.cpp:131] Memory required for data: 0
Segmentation fault (core dumped)
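
Since the GLOG output stops right after network initialization, running the binary under gdb and taking a backtrace would show where the crash actually happens. The binary name and argument below are taken from the other issues in this tracker; substitute your own:

gdb --args ./build/default/exp/images/images 1
(gdb) run
(gdb) bt        # print the stack trace once it segfaults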

std::signbit<float> is not allowed

Installing Caffe on Ubuntu 14.04 with CUDA 6.5 gives the following error.

/usr/local/cuda/bin/nvcc -ccbin=g++ -Xcompiler -fPIC -DNDEBUG -O2 -I/usr/local/include/python2.7 -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I/usr/local/include -Ibuild/src -I./src -I./include -I/usr/local/cuda/include -gencode arch=compute_20,code=sm_20 -gencode arch=compute_20,code=sm_21 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -c src/caffe/util/math_functions.cu -o build/src/caffe/util/math_functions.cuo

Error 1:
src/caffe/util/math_functions.cu(140): error: calling a __host__ function("std::signbit<float> ") from a __global__ function("caffe::sgnbit_kernel<float> ") is not allowed

Error 2:
src/caffe/util/math_functions.cu(140): error: calling a __host__ function("std::signbit<double> ") from a __global__ function("caffe::sgnbit_kernel<double> ") is not allowed

2 errors detected in the compilation of "/tmp/tmpxft_00003368_00000000-12_math_functions.compute_35.cpp1.ii".
make: *** [build/src/caffe/util/math_functions.cuo] Error 2

The fix is here: http://stackoverflow.com/questions/28985551/caffe-installation-in-ubuntu-14-04

In caffe/include/caffe/util/math_functions.hpp

try changing

using std::signbit;
DEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = signbit(x[i])); 

to

// using std::signbit;
DEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = std::signbit(x[i]));

Error in step "caffe make runtest" and "run mnist experiment"

system: Ubuntu 14.04 LTS

I used "dl_map_elites_images_mnist_direct_encoding.cpp" for the test. The errors seem to be caused by caffe::OpenCVImageToDatum() and ReadImageToDatum():

Running "make runtest -j16" in Caffe:

[ RUN ] FormatTest/0.TestOpenCVImageToDatum
F0307 14:24:24.129835 29969 format.cpp:17] Check failed: image.data Image data must not be NULL
*** Check failure stack trace: ***
@ 0x2af695d6fdaa (unknown)
@ 0x2af695d6fce4 (unknown)
@ 0x2af695d6f6e6 (unknown)
@ 0x2af695d72687 (unknown)
@ 0x5b2316 caffe::OpenCVImageToDatum()
@ 0x429f87 caffe::FormatTest_TestOpenCVImageToDatum_Test<>::TestBody()
@ 0x571083 testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x567c67 testing::Test::Run()
@ 0x567d0e testing::TestInfo::Run()
@ 0x567e15 testing::TestCase::Run()
@ 0x56b158 testing::internal::UnitTestImpl::RunAllTests()
@ 0x56b3e7 testing::UnitTest::Run()
@ 0x414030 main
@ 0x2af698338f45 (unknown)
@ 0x41a787 (unknown)
@ (nil) (unknown)
make: *** [runtest] Aborted (core dumped)

Run the executable file:

[ images ] $ ./images
[A]:
sferes2 version: (const char*)"0.1"
seed: 1488897135
F0307 14:22:50.208117 29366 image_data_layer.cpp:208] Check failed: ReadImageToDatum(lines_[lines_id_].first, lines_[lines_id_].second, new_height, new_width, images_in_color, &datum)
*** Check failure stack trace: ***
@ 0x7f5e5f069daa (unknown)
@ 0x7f5e5f069ce4 (unknown)
@ 0x7f5e5f0696e6 (unknown)
@ 0x7f5e5f06c687 (unknown)
@ 0x7f5e5fa0f28b caffe::ImageDataLayer<>::SetUp()
@ 0x7f5e5f9ecc68 caffe::Net<>::Init()
@ 0x7f5e5f9edcc5 caffe::Net<>::Net()
@ 0x42846b sferes::fit::FitMapDeepLearning<>::_setProbabilityList()
@ 0x428c37 sferes::eval::Eval<>::eval<>()
@ 0x42e48d sferes::ea::MapElite<>::random_pop()
@ 0x42e718 sferes::ea::Ea<>::run()
@ 0x41a037 main
@ 0x7f5e5d3d3f45 (unknown)
@ 0x4189f9 (unknown)
@ (nil) (unknown)
Aborted (core dumped)
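
Both failing checks (OpenCVImageToDatum in the unit test and ReadImageToDatum when the data layer reads its source list) fire when OpenCV cannot load an image from disk, so it may be worth confirming that every path in the data layer's source file is readable. A small hypothetical check, with the file name as a placeholder for whatever your prototxt's source points at:

# check that OpenCV can read every image listed in the data layer's source file
import cv2

with open('labels.txt') as f:      # placeholder; use the file from your prototxt's "source"
    for line in f:
        path = line.split()[0]
        if cv2.imread(path) is None:
            print('OpenCV could not read:', path)

For the runtest failure, the unit test likewise needs to find its bundled test image; running make runtest from a directory other than the Caffe root could plausibly trigger the same NULL-image check.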

no attribute 'backward_from_layer'

When I run find_fooling_image.py, I get: AttributeError: 'Classifier' object has no attribute 'backward_from_layer'.
How can I fix it?
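
backward_from_layer is not part of mainline pycaffe; it is one of the additions in the modified Caffe bundled with this repository, so this error usually means a stock Caffe install is being imported instead. A quick hypothetical check (the expected path is an assumption about your layout):

# confirm which caffe module Python is importing and whether it has the patched method
import caffe
print(caffe.__file__)                               # should point into this repo's modified Caffe
print(hasattr(caffe.Net, 'backward_from_layer'))    # expected True with the bundled pycaffe

If it prints False, putting the bundled Caffe's python directory first on PYTHONPATH should let find_fooling_image.py pick up the patched module.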

Could not run ./waf --exp images

Hi,
I have cloned fooling on my machine and set up all the necessary requirements. I don't have an NVIDIA graphics card, so I didn't install CUDA, and in ./wscript I commented out the line obj.includes = '. ../../ /usr/local/cuda-6.0/include'. ./waf configure and ./waf build succeed, but when I tried ./waf --exp images I got the following errors. Please help me out.

[39/94] cxx: exp/images/dl_map_elites_images_mnist.cpp -> build/debug/exp/images/dl_map_elites_images_mnist_1.o
In file included from ../exp/images/dl_map_elites_images_mnist.cpp:1:0:
../exp/images/dl_images.hpp:5:38: fatal error: sferes/phen/parameters.hpp: No such file or directory
#include <sferes/phen/parameters.hpp>
^
compilation terminated.
In file included from ../exp/images/dl_map_elites_images_mnist.cpp:1:0:
../exp/images/dl_images.hpp:5:38: fatal error: sferes/phen/parameters.hpp: No such file or directory
#include <sferes/phen/parameters.hpp>
^
compilation terminated.
Waf: Leaving directory `/home/ashiq/Desktop/fooling/sferes/build'
Build failed:
-> task failed (err #1):
{task: cxx dl_map_elites_images_mnist.cpp -> dl_map_elites_images_mnist_1.o}
-> task failed (err #1):
{task: cxx dl_map_elites_images_mnist.cpp -> dl_map_elites_images_mnist_1.o}

I would appreciate any help.
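
One thing worth checking: commenting out the whole obj.includes line also removes the '. ../../' entries, and '../../' is what lets the compiler find the sferes headers from the experiment directory. A hedged guess (assuming the line lives in the experiment's wscript) is to restore it with only the CUDA directory dropped:

# keep the sferes include paths, drop only the CUDA one
obj.includes = '. ../../'

and then re-run ./waf --exp images from the sferes directory.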

Error when running "make all"

Hi, I have a problem when installing caffe.

When I use command:

make all

I get:

find: ‘examples’: No such file or directory
find: ‘examples’: No such file or directory
protoc --proto_path=src --cpp_out=.build_release/src src/caffe/proto/caffe.proto
make: protoc: Command not found
make: *** [.build_release/src/caffe/proto/caffe.pb.cc] Error 127

Are there any other dependencies that need to be installed? I don't know exactly what "examples" refers to, and I am not sure what protoc is.
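
protoc is the Protocol Buffers compiler; Caffe uses it to generate caffe.pb.cc from src/caffe/proto/caffe.proto, so the build cannot proceed without it. On Ubuntu it is available from the standard repositories (the 'examples' warnings are most likely harmless and unrelated):

# install the protobuf compiler and development headers needed by the Caffe build
sudo apt-get install protobuf-compiler libprotobuf-dev

# then retry
make all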

ld cannot find -lcaffe

When running ./waf --exp images with the wscript shown in the attached screenshot,
ld cannot find -lcaffe.

I tried removing caffe from the obj.lib list and adding the linking option to cxxflags, but then the Caffe libraries are not found properly.

Any ideas what is going on? I was able to compile Caffe and complete all the steps before running the experiment with ./waf --exp images. I am using Ubuntu 14.04 in a virtual machine.
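
When ld reports that it cannot find -lcaffe, it usually just does not know which directory contains libcaffe.so. A rough sketch of one way to point both the linker and the runtime loader at it (the path is a placeholder for wherever your Caffe build put the library):

# tell the linker and the dynamic loader where libcaffe.so lives
export LIBRARY_PATH=/path/to/caffe/build/lib:$LIBRARY_PATH
export LD_LIBRARY_PATH=/path/to/caffe/build/lib:$LD_LIBRARY_PATH

./waf --exp images

Adding the same directory to a libpath entry in the wscript may achieve the same thing, depending on how the sferes build scripts are set up.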

Unable to load .caffemodel weights

Hi,

I am a beginner in deep learning and programming. I have trained a version of the MNIST LeNet following the Caffe instructions. However, I am unable to load the .caffemodel weights into your experiment, despite ensuring the prototxt layers have the same names (after building and running the debug version I get ~0.10 confidence for all labels, so I am guessing the weights were randomly initialised instead of loaded from the pretrained .caffemodel). I suspect this is due to the updated Caffe version I am using.

It is mentioned in the installation guide that:
"The specific version provided is different from the Caffe master branch and it has the
modification that enables feeding OpenCV data from memory to a Caffe model for
evaluation via ImageDataLayer."
May I know whether it is possible to port the modification over to the new Caffe version?

Thanks!
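
One way to narrow this down is to load the trained weights in pycaffe, list the layer names that carry parameters, and compare them against the layer names in the experiment's prototxt; Caffe copies weights by layer name and skips any layer whose name does not match, which would leave those layers randomly initialised and produce roughly uniform ~0.10 confidences. A hypothetical check (file names are placeholders):

# load the deploy prototxt together with the trained weights and list the parameter shapes
import caffe
net = caffe.Net('lenet_deploy.prototxt', 'lenet_iter_10000.caffemodel', caffe.TEST)
for name, blobs in net.params.items():
    print(name, [b.data.shape for b in blobs])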
