stephenyan1231 / caffe-public
a forked copy of caffe from Berkeley
License: Other
Dear all,
I got an error when executing "make all":
ln: target ‘.build_release/tools/upgrade_net_proto_text’ is not a directory
make: *** [.build_release/tools/upgrade_net_proto_text] Error 1
Any solution?
Regards
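This `ln` failure usually means a stale artifact is sitting at the target path from an earlier build. A minimal recovery sketch (assuming the default `.build_release` build directory; a full `make clean` also works but rebuilds everything):

```shell
# Remove the stale artifact that blocks ln, then rebuild.
STALE=.build_release/tools/upgrade_net_proto_text
rm -f "$STALE"        # safe even if the file is already absent
echo "removed stale artifact (if any): $STALE"
# then: make all
```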
Hi,
I saw this page: https://sites.google.com/site/homepagezhichengyan/home/hdcnn/code
and tried training on CIFAR-100, but during training the displayed loss is nan, even though accuracy seems to improve little by little. Could you kindly explain this?
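A common cause of a NaN loss alongside slowly improving accuracy is a diverging loss term, often from too large a learning rate; a hedged first experiment (this is generic Caffe advice, not specific to the HDCNN release, and the value below is illustrative) is to shrink `base_lr` in the solver definition:

```
# solver.prototxt sketch -- the value is illustrative, not from the HDCNN release.
# If the loss goes to nan early, try a ~10x smaller learning rate:
base_lr: 0.001
```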
Hi,
I cannot find any of the examples referred to on this page: https://sites.google.com/site/homepagezhichengyan/home/hdcnn/code.
Can you tell me why?
Hi,
It seems there is no HDCNN script in the examples folder.
Hi Zhicheng,
I followed the instructions on your project site, but when trying to build Caffe I ran into the following errors. Before building your copy of Caffe, I had successfully built the official Caffe. I hope you can help, thanks!
PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
CXX src/caffe/syncedmem.cpp
CXX src/caffe/data_manager.cpp
CXX src/caffe/copy_pipeline.cpp
CXX src/caffe/internal_thread.cpp
CXX src/caffe/layer_factory.cpp
CXX src/caffe/blob_solver.cpp
CXX src/caffe/data_variable_size_transformer.cpp
src/caffe/data_variable_size_transformer.cpp: In instantiation of ‘void caffe::DataVariableSizeTransformer&lt;Dtype&gt;::Transform(const cv::Mat&, caffe::Blob&lt;Dtype&gt;*, int&, int&) [with Dtype = float]’:
src/caffe/data_variable_size_transformer.cpp:284:1: required from here
src/caffe/data_variable_size_transformer.cpp:173:12: warning: unused variable ‘height’ [-Wunused-variable]
const int height = transformed_blob->height();
^
src/caffe/data_variable_size_transformer.cpp:174:12: warning: unused variable ‘width’ [-Wunused-variable]
const int width = transformed_blob->width();
^
src/caffe/data_variable_size_transformer.cpp: In instantiation of ‘void caffe::DataVariableSizeTransformer&lt;Dtype&gt;::Transform(const cv::Mat&, caffe::Blob&lt;Dtype&gt;*, int&, int&) [with Dtype = double]’:
src/caffe/data_variable_size_transformer.cpp:284:1: required from here
src/caffe/data_variable_size_transformer.cpp:173:12: warning: unused variable ‘height’ [-Wunused-variable]
const int height = transformed_blob->height();
^
src/caffe/data_variable_size_transformer.cpp:174:12: warning: unused variable ‘width’ [-Wunused-variable]
const int width = transformed_blob->width();
^
CXX src/caffe/blob.cpp
CXX src/caffe/stream_broadcast.cpp
CXX src/caffe/util/math_functions.cpp
CXX src/caffe/util/im2col.cpp
CXX src/caffe/util/upgrade_proto.cpp
CXX src/caffe/util/io.cpp
CXX src/caffe/util/insert_splits.cpp
CXX src/caffe/util/benchmark.cpp
CXX src/caffe/util/db.cpp
CXX src/caffe/net.cpp
CXX src/caffe/solver.cpp
CXX src/caffe/common.cpp
CXX src/caffe/layers/bnll_layer.cpp
CXX src/caffe/layers/dropout_layer.cpp
CXX src/caffe/layers/hdf5_data_layer.cpp
CXX src/caffe/layers/softmax_layer.cpp
CXX src/caffe/layers/cudnn_pooling_layer.cpp
CXX src/caffe/layers/silence_layer.cpp
CXX src/caffe/layers/image_data_layer.cpp
CXX src/caffe/layers/shift_stitch_layer.cpp
src/caffe/layers/shift_stitch_layer.cpp: In member function ‘void caffe::ShiftStitchLayer&lt;Dtype&gt;::Backward_cpu(const std::vector&lt;caffe::Blob&lt;Dtype&gt;*&gt;&, const std::vector&lt;bool&gt;&, const std::vector&lt;caffe::Blob&lt;Dtype&gt;*&gt;&) [with Dtype = float]’:
src/caffe/layers/shift_stitch_layer.cpp:112:26: warning: ‘src_blob’ may be used uninitialized in this function [-Wmaybe-uninitialized]
Blob&lt;Dtype&gt; *src_blob, *tgt_blob;
^
src/caffe/layers/shift_stitch_layer.cpp: In member function ‘void caffe::ShiftStitchLayer&lt;Dtype&gt;::Backward_cpu(const std::vector&lt;caffe::Blob&lt;Dtype&gt;*&gt;&, const std::vector&lt;bool&gt;&, const std::vector&lt;caffe::Blob&lt;Dtype&gt;*&gt;&) [with Dtype = double]’:
src/caffe/layers/shift_stitch_layer.cpp:112:26: warning: ‘src_blob’ may be used uninitialized in this function [-Wmaybe-uninitialized]
CXX src/caffe/layers/fine2coarseprob_layer.cpp
CXX src/caffe/layers/tanh_layer.cpp
CXX src/caffe/layers/loss_layer.cpp
CXX src/caffe/layers/infogain_loss_layer.cpp
CXX src/caffe/layers/split_layer.cpp
CXX src/caffe/layers/contrastive_loss_layer.cpp
CXX src/caffe/layers/sigmoid_layer.cpp
CXX src/caffe/layers/cudnn_sigmoid_layer.cpp
CXX src/caffe/layers/dummy_data_layer.cpp
CXX src/caffe/layers/euclidean_loss_layer.cpp
CXX src/caffe/layers/window_data_layer.cpp
CXX src/caffe/layers/cudnn_conv_layer.cpp
CXX src/caffe/layers/accuracy_layer.cpp
CXX src/caffe/layers/threshold_layer.cpp
CXX src/caffe/layers/exp_layer.cpp
CXX src/caffe/layers/flatten_layer.cpp
CXX src/caffe/layers/compact_probabilistic_average_prob_layer.cpp
CXX src/caffe/layers/argmax_layer.cpp
CXX src/caffe/layers/base_data_layer.cpp
CXX src/caffe/layers/hdf5_output_layer.cpp
CXX src/caffe/layers/conv_layer.cpp
CXX src/caffe/layers/pooling_layer.cpp
CXX src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
CXX src/caffe/layers/multinomial_logistic_loss_layer.cpp
CXX src/caffe/layers/spatial_prob_aggregation_layer.cpp
CXX src/caffe/layers/hinge_loss_layer.cpp
CXX src/caffe/layers/absval_layer.cpp
CXX src/caffe/layers/power_layer.cpp
CXX src/caffe/layers/deconv_layer.cpp
CXX src/caffe/layers/im2col_layer.cpp
CXX src/caffe/layers/lrn_layer.cpp
CXX src/caffe/layers/fine2multicoarseprob_layer.cpp
CXX src/caffe/layers/data_layer.cpp
CXX src/caffe/layers/cudnn_softmax_layer.cpp
CXX src/caffe/layers/mvn_layer.cpp
CXX src/caffe/layers/inner_product_layer.cpp
CXX src/caffe/layers/softmax_loss_layer.cpp
CXX src/caffe/layers/multinomial_logistic_sparsity_loss.cpp
CXX src/caffe/layers/spatial_accuracy_layer.cpp
CXX src/caffe/layers/slice_layer.cpp
CXX src/caffe/layers/cudnn_relu_layer.cpp
CXX src/caffe/layers/concat_layer.cpp
CXX src/caffe/layers/shift_pooling_layer.cpp
CXX src/caffe/layers/memory_data_layer.cpp
CXX src/caffe/layers/relu_layer.cpp
CXX src/caffe/layers/neuron_layer.cpp
CXX src/caffe/layers/base_conv_layer.cpp
CXX src/caffe/layers/eltwise_layer.cpp
CXX src/caffe/layers/cudnn_tanh_layer.cpp
CXX src/caffe/layers/data_variable_size_layer.cpp
CXX src/caffe/data_transformer.cpp
CXX src/caffe/blob_diff_reducer.cpp
NVCC src/caffe/util/im2col.cu
NVCC src/caffe/util/matrix_quantization.cu
NVCC src/caffe/util/math_functions.cu
NVCC src/caffe/layers/flatten_layer.cu
NVCC src/caffe/layers/dropout_layer.cu
NVCC src/caffe/layers/slice_layer.cu
NVCC src/caffe/layers/threshold_layer.cu
NVCC src/caffe/layers/softmax_layer.cu
NVCC src/caffe/layers/sigmoid_cross_entropy_loss_layer.cu
NVCC src/caffe/layers/im2col_layer.cu
NVCC src/caffe/layers/deconv_layer.cu
NVCC src/caffe/layers/inner_product_layer.cu
NVCC src/caffe/layers/cudnn_tanh_layer.cu
NVCC src/caffe/layers/hdf5_output_layer.cu
NVCC src/caffe/layers/pooling_layer.cu
NVCC src/caffe/layers/euclidean_loss_layer.cu
NVCC src/caffe/layers/sigmoid_layer.cu
NVCC src/caffe/layers/relu_layer.cu
NVCC src/caffe/layers/absval_layer.cu
NVCC src/caffe/layers/split_layer.cu
NVCC src/caffe/layers/contrastive_loss_layer.cu
NVCC src/caffe/layers/conv_layer.cu
NVCC src/caffe/layers/bnll_layer.cu
NVCC src/caffe/layers/cudnn_conv_layer.cu
NVCC src/caffe/layers/shift_pooling_layer.cu
NVCC src/caffe/layers/tanh_layer.cu
NVCC src/caffe/layers/base_conv_layer.cu
NVCC src/caffe/layers/cudnn_relu_layer.cu
NVCC src/caffe/layers/lrn_layer.cu
NVCC src/caffe/layers/concat_layer.cu
NVCC src/caffe/layers/eltwise_layer.cu
NVCC src/caffe/layers/hdf5_data_layer.cu
NVCC src/caffe/layers/power_layer.cu
NVCC src/caffe/layers/cudnn_pooling_layer.cu
NVCC src/caffe/layers/cudnn_softmax_layer.cu
NVCC src/caffe/layers/exp_layer.cu
NVCC src/caffe/layers/silence_layer.cu
NVCC src/caffe/layers/base_data_layer.cu
NVCC src/caffe/layers/shift_stitch_layer.cu
NVCC src/caffe/layers/cudnn_sigmoid_layer.cu
NVCC src/caffe/layers/mvn_layer.cu
AR -o .build_release/lib/libcaffe.a
LD -o .build_release/lib/libcaffe.so
CXX tools/compute_image_mean.cpp
LD .build_release/tools/compute_image_mean.o
CXX tools/finetune_net.cpp
LD .build_release/tools/finetune_net.o
CXX tools/convert_imageset.cpp
LD .build_release/tools/convert_imageset.o
CXX tools/net_speed_benchmark.cpp
LD .build_release/tools/net_speed_benchmark.o
CXX tools/extract_features.cpp
LD .build_release/tools/extract_features.o
CXX tools/caffe.cpp
LD .build_release/tools/caffe.o
CXX tools/device_query.cpp
LD .build_release/tools/device_query.o
CXX tools/extract_features_binary_output.cpp
LD .build_release/tools/extract_features_binary_output.o
CXX tools/finetune_net_match.cpp
LD .build_release/tools/finetune_net_match.o
CXX tools/upgrade_net_proto_text.cpp
LD .build_release/tools/upgrade_net_proto_text.o
CXX tools/upgrade_net_proto_binary.cpp
LD .build_release/tools/upgrade_net_proto_binary.o
CXX tools/test_net.cpp
LD .build_release/tools/test_net.o
CXX tools/train_net.cpp
LD .build_release/tools/train_net.o
CXX examples/cifar10/convert_cifar_data.cpp
LD .build_release/examples/cifar10/convert_cifar_data.o
CXX examples/mnist/convert_mnist_data.cpp
LD .build_release/examples/mnist/convert_mnist_data.o
CXX examples/cifar100/convert_cifar100_float_data.cpp
LD .build_release/examples/cifar100/convert_cifar100_float_data.o
CXX examples/siamese/convert_mnist_siamese_data.cpp
LD .build_release/examples/siamese/convert_mnist_siamese_data.o
CXX examples/imagenet/convert_imageset_selective_label.cpp
LD .build_release/examples/imagenet/convert_imageset_selective_label.o
I copied my own Makefile.config into your caffe-public, then ran make -j10:
AR -o .build_release/lib/libcaffe.a
LD -o .build_release/lib/libcaffe.so
CXX/LD -o .build_release/tools/compute_image_mean.bin
CXX/LD -o .build_release/tools/extract_features_binary_output.bin
CXX/LD -o .build_release/tools/test_net.bin
CXX/LD -o .build_release/tools/finetune_net_match.bin
CXX/LD -o .build_release/tools/upgrade_net_proto_text.bin
CXX/LD -o .build_release/tools/convert_imageset.bin
CXX/LD -o .build_release/tools/caffe.bin
CXX/LD -o .build_release/tools/extract_features.bin
CXX/LD -o .build_release/tools/net_speed_benchmark.bin
CXX/LD -o .build_release/tools/device_query.bin
CXX/LD -o .build_release/tools/train_net.bin
CXX/LD -o .build_release/tools/finetune_net.bin
CXX/LD -o .build_release/tools/upgrade_net_proto_binary.bin
CXX/LD -o .build_release/examples/cifar10/convert_cifar_data.bin
CXX/LD -o .build_release/examples/imagenet/convert_imageset_selective_label.bin
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/extract_features.bin] Error 1
make: *** Waiting for unfinished jobs....
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/upgrade_net_proto_text.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/extract_features_binary_output.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/finetune_net_match.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/caffe.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/convert_imageset.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/compute_image_mean.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/tools/upgrade_net_proto_binary.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/examples/cifar10/convert_cifar_data.bin] Error 1
.build_release/lib/libcaffe.so: undefined reference to `nvtxMarkA'
collect2: error: ld returned 1 exit status
make: *** [.build_release/examples/imagenet/convert_imageset_selective_label.bin] Error 1
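The `nvtxMarkA` symbol comes from NVIDIA's NVTX profiling library (libnvToolsExt, shipped with CUDA), which this fork apparently calls but the stock link line does not include. A minimal sketch of a fix (paths assume a standard CUDA install under /usr/local/cuda; where exactly to append depends on whether your Makefile honors these variables from Makefile.config):

```makefile
# Makefile fragment (sketch): link the NVTX library that provides nvtxMarkA.
LIBRARIES += nvToolsExt
LIBRARY_DIRS += /usr/local/cuda/lib64
```

Then relink with `make clean && make all`. If the Makefile overwrites `LIBRARIES`, passing `-lnvToolsExt` via the linker flags has the same effect.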
Hi, I don't know Caffe, so do you have a TensorFlow version of this code? If you have one, it would help me a lot.
Hi Zhicheng,
I successfully built Caffe using your tutorial here: https://sites.google.com/site/homepagezhichengyan/home/hdcnn/code. However, when running the CIFAR-100 example in the 2nd step (./examples/cifar100/train_cifar100_NIN_float_crop_v2_train_val.sh), I hit the strange error shown below. I think it may be a multi-GPU problem, since I can run a single experiment on one GPU with Caffe's official example. Could you please give me some advice? Thank you very much!
I0302 18:59:01.165179 5920 caffe.cpp:105] Use GPUs with device IDs below
I0302 18:59:01.165335 5920 caffe.cpp:107] device id 0
I0302 18:59:01.165354 5920 caffe.cpp:107] device id 1
I0302 18:59:01.165369 5920 caffe.cpp:117] Starting Optimization
I0302 18:59:11.525671 5920 solver.cpp:77] Creating training net from net file: models/cifar100_NIN_float_crop_v2/train_val/train_test.prototxt
I0302 18:59:11.525739 5920 upgrade_proto.cpp:928] start ReadNetParamsFromTextFileOrDie
I0302 18:59:11.526916 5920 solver.cpp:80] create net
I0302 18:59:11.527045 5920 net.cpp:475] The NetState phase (0) differed from the phase (1) specified by a rule in layer cifar
I0302 18:59:11.527104 5920 net.cpp:475] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0302 18:59:11.527258 5920 data_transformer.cpp:24] Loading mean file from: data/cifar100/float_mean.binaryproto
I0302 18:59:11.530076 5920 db.cpp:20] Opened leveldb examples/cifar100/cifar100-float-train-train-val-leveldb/cifar100-train-leveldb
I0302 18:59:11.530103 5920 data_manager.cpp:97] new database cursor
I0302 18:59:11.530953 5920 data_manager.cpp:99] new database transaction
*** Aborted at 1456963151 (unix time) try "date -d @1456963151" if you are using GNU date ***
PC: @ 0x7f9d86a53644 leveldb::(anonymous namespace)::MergingIterator::key()
*** SIGSEGV (@0x18) received by PID 5920 (TID 0x7f9d8c7339c0) from PID 24; stack trace: ***
@ 0x7f9d80d81670 (unknown)
@ 0x7f9d86a53644 leveldb::(anonymous namespace)::MergingIterator::key()
@ 0x7f9d86a3dc7e leveldb::(anonymous namespace)::DBIter::key()
@ 0x4fb102 caffe::db::LevelDBCursor::key()
@ 0x535fa1 caffe::DataManager<>::DataManager()
@ 0x5496b2 caffe::Net<>::InitDataManager()
@ 0x5676be caffe::Net<>::Init()
@ 0x567880 caffe::Net<>::Net()
@ 0x573eab caffe::Solver<>::InitTrainNet()
@ 0x574ebc caffe::Solver<>::Init()
@ 0x575046 caffe::Solver<>::Solver()
@ 0x4229f0 caffe::GetSolver<>()
@ 0x41c1f8 train()
@ 0x414091 main
@ 0x7f9d80d6db15 __libc_start_main
@ 0x41bd6d (unknown)
./examples/cifar100/train_cifar100_NIN_float_crop_v2_train_val.sh: line 5: 5920 Segmentation fault (core dumped) GLOG_logtostderr=1 ./build/tools/caffe train --solver=models/cifar100_NIN_float_crop_v2/train_val/solver.prototxt
I0302 18:59:13.376158 6839 caffe.cpp:105] Use GPUs with device IDs below
I0302 18:59:13.376302 6839 caffe.cpp:107] device id 0
I0302 18:59:13.376323 6839 caffe.cpp:107] device id 1
I0302 18:59:13.376339 6839 caffe.cpp:117] Starting Optimization
I0302 18:59:24.075724 6839 solver.cpp:77] Creating training net from net file: models/cifar100_NIN_float_crop_v2/train_val/train_test.prototxt
I0302 18:59:24.075798 6839 upgrade_proto.cpp:928] start ReadNetParamsFromTextFileOrDie
I0302 18:59:24.076957 6839 solver.cpp:80] create net
I0302 18:59:24.077093 6839 net.cpp:475] The NetState phase (0) differed from the phase (1) specified by a rule in layer cifar
I0302 18:59:24.077142 6839 net.cpp:475] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0302 18:59:24.077369 6839 data_transformer.cpp:24] Loading mean file from: data/cifar100/float_mean.binaryproto
I0302 18:59:24.080384 6839 db.cpp:20] Opened leveldb examples/cifar100/cifar100-float-train-train-val-leveldb/cifar100-train-leveldb
I0302 18:59:24.080411 6839 data_manager.cpp:97] new database cursor
I0302 18:59:24.081198 6839 data_manager.cpp:99] new database transaction
*** Aborted at 1456963164 (unix time) try "date -d @1456963164" if you are using GNU date ***
PC: @ 0x7f9b196d2644 leveldb::(anonymous namespace)::MergingIterator::key()
*** SIGSEGV (@0x18) received by PID 6839 (TID 0x7f9b1f3b29c0) from PID 24; stack trace: ***
@ 0x7f9b13a00670 (unknown)
@ 0x7f9b196d2644 leveldb::(anonymous namespace)::MergingIterator::key()
@ 0x7f9b196bcc7e leveldb::(anonymous namespace)::DBIter::key()
@ 0x4fb102 caffe::db::LevelDBCursor::key()
@ 0x535fa1 caffe::DataManager<>::DataManager()
@ 0x5496b2 caffe::Net<>::InitDataManager()
@ 0x5676be caffe::Net<>::Init()
@ 0x567880 caffe::Net<>::Net()
@ 0x573eab caffe::Solver<>::InitTrainNet()
@ 0x574ebc caffe::Solver<>::Init()
@ 0x575046 caffe::Solver<>::Solver()
@ 0x4229f0 caffe::GetSolver<>()
@ 0x41c1f8 train()
@ 0x414091 main
@ 0x7f9b139ecb15 __libc_start_main
@ 0x41bd6d (unknown)
./examples/cifar100/train_cifar100_NIN_float_crop_v2_train_val.sh: line 8: 6839 Segmentation fault (core dumped) GLOG_logtostderr=1 ./build/tools/caffe train --solver=models/cifar100_NIN_float_crop_v2/train_val/solver_lr1.prototxt --snapshot=models/cifar100_NIN_float_crop_v2/train_val/cifar100_NIN_float_crop_v2_iter_100000.solverstate
I0302 18:59:25.059556 7868 caffe.cpp:105] Use GPUs with device IDs below
I0302 18:59:25.059707 7868 caffe.cpp:107] device id 0
I0302 18:59:25.059726 7868 caffe.cpp:107] device id 1
I0302 18:59:25.059741 7868 caffe.cpp:117] Starting Optimization
I0302 18:59:35.792002 7868 solver.cpp:77] Creating training net from net file: models/cifar100_NIN_float_crop_v2/train_val/train_test.prototxt
I0302 18:59:35.792084 7868 upgrade_proto.cpp:928] start ReadNetParamsFromTextFileOrDie
I0302 18:59:35.793494 7868 solver.cpp:80] create net
I0302 18:59:35.793649 7868 net.cpp:475] The NetState phase (0) differed from the phase (1) specified by a rule in layer cifar
I0302 18:59:35.793747 7868 net.cpp:475] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0302 18:59:35.793915 7868 data_transformer.cpp:24] Loading mean file from: data/cifar100/float_mean.binaryproto
I0302 18:59:35.796743 7868 db.cpp:20] Opened leveldb examples/cifar100/cifar100-float-train-train-val-leveldb/cifar100-train-leveldb
I0302 18:59:35.796777 7868 data_manager.cpp:97] new database cursor
I0302 18:59:35.797546 7868 data_manager.cpp:99] new database transaction
*** Aborted at 1456963175 (unix time) try "date -d @1456963175" if you are using GNU date ***
PC: @ 0x7f10c8d4a644 leveldb::(anonymous namespace)::MergingIterator::key()
*** SIGSEGV (@0x18) received by PID 7868 (TID 0x7f10cea2a9c0) from PID 24; stack trace: ***
@ 0x7f10c3078670 (unknown)
@ 0x7f10c8d4a644 leveldb::(anonymous namespace)::MergingIterator::key()
@ 0x7f10c8d34c7e leveldb::(anonymous namespace)::DBIter::key()
@ 0x4fb102 caffe::db::LevelDBCursor::key()
@ 0x535fa1 caffe::DataManager<>::DataManager()
@ 0x5496b2 caffe::Net<>::InitDataManager()
@ 0x5676be caffe::Net<>::Init()
@ 0x567880 caffe::Net<>::Net()
@ 0x573eab caffe::Solver<>::InitTrainNet()
@ 0x574ebc caffe::Solver<>::Init()
@ 0x575046 caffe::Solver<>::Solver()
@ 0x4229f0 caffe::GetSolver<>()
@ 0x41c1f8 train()
@ 0x414091 main
@ 0x7f10c3064b15 __libc_start_main
@ 0x41bd6d (unknown)
./examples/cifar100/train_cifar100_NIN_float_crop_v2_train_val.sh: line 11: 7868 Segmentation fault (core dumped) GLOG_logtostderr=1 ./build/tools/caffe train --solver=models/cifar100_NIN_float_crop_v2/train_val/solver_lr2.prototxt --snapshot=models/cifar100_NIN_float_crop_v2/train_val/cifar100_NIN_float_crop_v2_iter_115000.solverstate
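The crash happens while `DataManager` reads the first key from the LevelDB, which often means the database produced in step 1 is missing, empty, or left in a bad state, rather than a GPU problem. A quick hedged check (the path is taken from the log above):

```shell
# Verify the training LevelDB from the log exists and is non-empty;
# a missing or empty directory here would explain the segfault on key().
DB=examples/cifar100/cifar100-float-train-train-val-leveldb/cifar100-train-leveldb
if [ -d "$DB" ] && [ -n "$(ls -A "$DB" 2>/dev/null)" ]; then
  echo "leveldb present: $DB"
else
  echo "leveldb missing or empty -- regenerate it with the step-1 data script"
fi
```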
It seems the folder does not exist anymore.