
intrinsic-dimension's People

Contributors

bhenhsi, rquber, yosinski


intrinsic-dimension's Issues

No squeezenet_fastfood

The train_distributed.py script imports the function build_squeezenet_fastfood from model_builders.py, but that function is not defined there, nor anywhere else in the repository. Could you please add it?
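In case it helps triage, here is a quick way to confirm which builders the module actually exports (a minimal sketch, assuming only that model_builders.py is importable from the repo root; nothing here is part of the repo's own API):

```python
# List every build_* function that model_builders.py really defines, to confirm
# that build_squeezenet_fastfood is absent and the ImportError is expected.
import inspect
import model_builders

builders = sorted(
    name for name, obj in inspect.getmembers(model_builders, inspect.isfunction)
    if name.startswith("build_")
)
print("\n".join(builders))
```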

Training on ImageNet yields either exploding or constant loss

I am trying to train the SqueezeNet or AlexNet architecture on a subset of the ImageNet dataset (in particular, using just a small number of classes). I tried many choices of learning rate and all the available optimizers; in all cases, even with regularization added, the network does not seem to learn. With some combinations the loss diverges, while with others it remains roughly constant.
I am training on a machine with 4 GPUs.

Do you know possible reasons for this problem?
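Not speaking for the authors, but when the loss either diverges or flatlines across optimizers, a coarse learning-rate sweep with gradient clipping is a useful first check. A minimal sketch with tf.keras; model_fn, train_data, and val_data are placeholders, not names from this repository:

```python
import tensorflow as tf

def sanity_sweep(model_fn, train_data, val_data, lrs=(1e-2, 1e-3, 1e-4, 1e-5)):
    """Briefly train a fresh model at each learning rate to see which regime learns at all."""
    results = {}
    for lr in lrs:
        model = model_fn()  # placeholder: builds a fresh SqueezeNet/AlexNet
        # clipnorm guards against the exploding-loss case while the sweep covers the flat one
        opt = tf.keras.optimizers.Adam(learning_rate=lr, clipnorm=1.0)
        model.compile(optimizer=opt,
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        hist = model.fit(train_data, validation_data=val_data, epochs=2, verbose=0)
        results[lr] = hist.history["val_accuracy"][-1]
    return results
```

If no learning rate in the sweep moves validation accuracy above chance, the problem is more likely in the data pipeline or label encoding than in the optimizer choice.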

Reproducing CIFAR-10, ResNet-18, PyTorch

Hi @yosinski, @rquber, @mimosavvy

I've attempted to reproduce Figure S14 (see figure below) in the arXiv version of the paper (https://arxiv.org/pdf/1804.08838.pdf), where you estimate intrinsic dimension on CIFAR-10 using ResNet.

[figure: resnet_paper — Figure S14 from the paper]

I used ResNet-18 from torchvision.models, fastfood transform, lr=0.0003, batch_size=32, ADAM optimizer, no regularisation, no learning rate schedule. The results I achieved are below.

[figure: cifar10 — my reproduction results]

I would appreciate it if you could answer the following questions:

  1. What was the learning rate used for your experiment?
  2. Did you use a learning rate schedule?
  3. What was the optimizer used for the CIFAR-10 + ResNet experiment? (ADAM or SGD? You mention in the Figure S10 caption that ADAM produces a higher intrinsic dimension.)
  4. Generally, for how many epochs/iterations do you train when estimating intrinsic dimension? Do you use a stopping criterion?
  5. The ResNet-18 I used differs from the ResNet you used for the experiment; in your case the model was a "20-layer structure of ResNet with 280k parameters". I was expecting the larger ResNet-18 to have a lower intrinsic dimension. Any comments on this?
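For anyone else attempting the reproduction in PyTorch: the repository's implementation is Keras/TensorFlow and uses the fastfood transform, but the underlying idea — training only a d-dimensional vector theta_d while reconstructing the full weights as theta = theta_0 + P·theta_d — can be sketched with a dense random projection. The code below is an assumption-laden illustration, not the authors' code; with a dense P it is only practical for small models or small d, which is exactly why the paper uses fastfood to avoid materialising P.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SubspaceWrapper(nn.Module):
    """Train only a d-dimensional vector theta_d; every weight is rebuilt as theta_0 + P @ theta_d."""
    def __init__(self, model, d):
        super().__init__()
        self.model = model
        self.theta_d = nn.Parameter(torch.zeros(d))  # the only trainable tensor
        self.names, self.theta0, self.projs = [], [], []
        for name, p in model.named_parameters():
            p.requires_grad_(False)
            self.theta0.append(p.detach().clone())
            # dense random projection block for this parameter; scaling here is illustrative,
            # and these tensors stay on CPU unless moved manually before GPU training
            self.projs.append(torch.randn(p.numel(), d) / (p.numel() ** 0.5))
            self.names.append(name)

    def forward(self, x):
        # rebuild every weight from theta_d, then run a stateless forward pass
        params = {name: (t0 + (P @ self.theta_d).view_as(t0))
                  for name, t0, P in zip(self.names, self.theta0, self.projs)}
        return torch.func.functional_call(self.model, params, (x,))
```

Only model.theta_d receives gradients, so the optimizer would be, e.g., torch.optim.Adam([model.theta_d], lr=3e-4), and sweeping d while recording validation accuracy gives the intrinsic-dimension curve the paper plots.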

Basic Info.

I don't know if this will help anyone understand fast random projections:
http://md2020.eu5.org/wht1.html
I wrote it for HTML5 practice, and I can't say it works in anything other than the Pale Moon browser.
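For context on why such projections can be "fast": the Walsh–Hadamard transform that page discusses can be applied in O(n log n) without ever forming the Hadamard matrix. A minimal, unnormalized NumPy sketch (unrelated to the linked page's code):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two.
    Runs in O(n log n) instead of the O(n^2) of an explicit matrix multiply."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

# A fastfood-style projection composes this transform with random sign flips,
# a permutation, and random scaling, so a vector can be projected to a higher
# dimension without ever storing a dense projection matrix.
```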
