Comments (19)

agibsonccc commented on May 27, 2024

@YusifCodes it tells you what the error is. This looks like it thinks it's a custom optimizer. I'm not even sure where that's coming from; it might be a newer optimizer in keras. I'll need more context than what you're telling me here. You're saying you replaced it, but the error message is contradicting you. It's named "SGD scan"? For anything with model import, don't just assume it blanket doesn't work. Always pay attention to both versions.

YusifCodes commented on May 27, 2024

I am pretty sure adam and sgd are not custom, am I wrong?

agibsonccc commented on May 27, 2024

@YusifCodes yeah they are. I'm guessing it's a new version issue. That error message is strange. Are you using keras 3.0 or something?

YusifCodes commented on May 27, 2024

Name: keras
Version: 2.14.0
Summary: Deep learning for humans.
Home-page: https://keras.io/
Author: Keras team
Author-email: [email protected]
License: Apache 2.0
Location: c:\users\user\appdata\local\programs\python\python310\lib\site-packages
Requires:
Required-by: tensorflow-intel

My keras package.

Do you have any suggestions?

I was using the Adam optimizer before this, and the error was totally the same, except the name appeared as Custom>Adamcan.

agibsonccc commented on May 27, 2024

Yeah that's definitely odd... that's new. I'll treat that as the main issue for now. In the meantime, try 2.12 or something. It looks like this is the intel fork?

YusifCodes commented on May 27, 2024

I'll try 2.12 now and will tell you how it goes, thanks.
Not sure what you mean about it being an intel fork.

agibsonccc commented on May 27, 2024

@YusifCodes oh sorry that's the package requirement. I see now. Either way, the hardware vendors tend to publish forks of the python frameworks. That's what I thought that was. Try a few different older versions and see what happens. There's no reason one of them shouldn't work.

YusifCodes commented on May 27, 2024

Hello there, sorry for the late reply. I got it working with 2.12 last Friday. But after making some minor alterations to my model, saving it, and trying to run it in Java, I am getting the same error. I tried 2.11 and 2.13.1, still no luck. Thanks.

agibsonccc commented on May 27, 2024

Did you resave it with keras 2.14 again?

YusifCodes commented on May 27, 2024

Nope, I resaved it with 2.11, 2.12, and 2.13.1; it is the same everywhere.

ParshinAlex commented on May 27, 2024

Guys, for me it's the same problem. I was debugging it, and what I saw is that the problem may be in the dl4j 1.0.0-M2.1 module, in particular in the class KerasOptimizerUtils (package org.deeplearning4j.nn.modelimport.keras.utils).

I guess some time ago the code in Keras was changed so that the names of the optimizers went from the simple Adam, SGD, and so on to Custom>Adam and so on. This change is handled by the current code on master of dl4j, BUT in the decompiled class KerasOptimizerUtils that ships in the 1.0.0-M2.1 jar I see (possibly) some old code which does not handle this change, and that produces the exception mentioned by YusifCodes.

Here is the code:

public KerasOptimizerUtils() {
}

public static IUpdater mapOptimizer(Map<String, Object> optimizerConfig) throws UnsupportedKerasConfigurationException, InvalidKerasConfigurationException {
    if (!optimizerConfig.containsKey("class_name")) {
        throw new InvalidKerasConfigurationException("Optimizer config does not contain a name field.");
    } else {
        String optimizerName = (String)optimizerConfig.get("class_name");
        if (!optimizerConfig.containsKey("config")) {
            throw new InvalidKerasConfigurationException("Field config missing from layer config");
        } else {
            Map<String, Object> optimizerParameters = (Map)optimizerConfig.get("config");
            Object dl4jOptimizer;
            double lr;
            double rho;
            double epsilon;
            double decay;
            double scheduleDecay;
            switch (optimizerName) {
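                // NOTE: the lr/rho/epsilon/decay locals below are decompiler
                // artifacts; they are reused as scratch variables and often hold
                // a different Keras parameter than their names suggest.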
                case "Adam":
                    lr = (Double)(optimizerParameters.containsKey("lr") ? optimizerParameters.get("lr") : optimizerParameters.get("learning_rate"));
                    rho = (Double)optimizerParameters.get("beta_1");
                    epsilon = (Double)optimizerParameters.get("beta_2");
                    decay = (Double)optimizerParameters.get("epsilon");
                    scheduleDecay = (Double)optimizerParameters.get("decay");
                    dl4jOptimizer = (new Adam.Builder()).beta1(rho).beta2(epsilon).epsilon(decay).learningRate(lr).learningRateSchedule(scheduleDecay == 0.0 ? null : new InverseSchedule(ScheduleType.ITERATION, lr, scheduleDecay, 1.0)).build();
                    break;
                case "Adadelta":
                    lr = (Double)optimizerParameters.get("rho");
                    rho = (Double)optimizerParameters.get("epsilon");
                    dl4jOptimizer = (new AdaDelta.Builder()).epsilon(rho).rho(lr).build();
                    break;
                case "Adgrad":
                    lr = (Double)(optimizerParameters.containsKey("lr") ? optimizerParameters.get("lr") : optimizerParameters.get("learning_rate"));
                    rho = (Double)optimizerParameters.get("epsilon");
                    epsilon = (Double)optimizerParameters.get("decay");
                    dl4jOptimizer = (new AdaGrad.Builder()).epsilon(rho).learningRate(lr).learningRateSchedule(epsilon == 0.0 ? null : new InverseSchedule(ScheduleType.ITERATION, lr, epsilon, 1.0)).build();
                    break;
                case "Adamax":
                    lr = (Double)(optimizerParameters.containsKey("lr") ? optimizerParameters.get("lr") : optimizerParameters.get("learning_rate"));
                    rho = (Double)optimizerParameters.get("beta_1");
                    epsilon = (Double)optimizerParameters.get("beta_2");
                    decay = (Double)optimizerParameters.get("epsilon");
                    dl4jOptimizer = new AdaMax(lr, rho, epsilon, decay);
                    break;
                case "Nadam":
                    lr = (Double)(optimizerParameters.containsKey("lr") ? optimizerParameters.get("lr") : optimizerParameters.get("learning_rate"));
                    rho = (Double)optimizerParameters.get("beta_1");
                    epsilon = (Double)optimizerParameters.get("beta_2");
                    decay = (Double)optimizerParameters.get("epsilon");
                    scheduleDecay = (Double)optimizerParameters.getOrDefault("schedule_decay", 0.0);
                    dl4jOptimizer = (new Nadam.Builder()).beta1(rho).beta2(epsilon).epsilon(decay).learningRate(lr).learningRateSchedule(scheduleDecay == 0.0 ? null : new InverseSchedule(ScheduleType.ITERATION, lr, scheduleDecay, 1.0)).build();
                    break;
                case "SGD":
                    lr = (Double)(optimizerParameters.containsKey("lr") ? optimizerParameters.get("lr") : optimizerParameters.get("learning_rate"));
                    rho = (Double)(optimizerParameters.containsKey("epsilon") ? optimizerParameters.get("epsilon") : optimizerParameters.get("momentum"));
                    epsilon = (Double)optimizerParameters.get("decay");
                    dl4jOptimizer = (new Nesterovs.Builder()).momentum(rho).learningRate(lr).learningRateSchedule(epsilon == 0.0 ? null : new InverseSchedule(ScheduleType.ITERATION, lr, epsilon, 1.0)).build();
                    break;
                case "RMSprop":
                    lr = (Double)(optimizerParameters.containsKey("lr") ? optimizerParameters.get("lr") : optimizerParameters.get("learning_rate"));
                    rho = (Double)optimizerParameters.get("rho");
                    epsilon = (Double)optimizerParameters.get("epsilon");
                    decay = (Double)optimizerParameters.get("decay");
                    dl4jOptimizer = (new RmsProp.Builder()).epsilon(epsilon).rmsDecay(rho).learningRate(lr).learningRateSchedule(decay == 0.0 ? null : new InverseSchedule(ScheduleType.ITERATION, lr, decay, 1.0)).build();
                    break;
                default:
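                    // With a class name like "Custom>SGD" none of the cases above
                    // match, so execution falls through to here; the missing spaces
                    // in the message string produce text like "Custom>SGDcan not bematched".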
                    throw new UnsupportedKerasConfigurationException("Optimizer with name " + optimizerName + "can not bematched to a DL4J optimizer. Note that custom TFOptimizers are not supported by model import");
            }

            return (IUpdater)dl4jOptimizer;
        }
    }
}

}

As you can see, it expects the simple optimizer names, and there is no handling of the Custom>... prefix here, which is different from the code on master. (The missing spaces in the exception message string are also why the error text runs together as Custom>SGDcan not bematched.)
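For illustration, here is a minimal sketch of the kind of name normalization the import needs before that switch. This is my own hypothetical helper, not necessarily how master actually implements it:

    // Hypothetical helper, for illustration only: newer Keras serializes the
    // optimizer class_name as "Custom>Adam"; stripping everything up to the
    // last '>' recovers the plain name the switch above expects.
    private static String normalizeOptimizerName(String className) {
        int idx = className.lastIndexOf('>');
        return idx >= 0 ? className.substring(idx + 1) : className;
    }

With that, normalizeOptimizerName("Custom>Adam") returns "Adam", while a plain "SGD" passes through unchanged.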

@agibsonccc, please can you check this and tell whether it's only a version problem on my side, or whether the 1.0.0-M2.1 jar really contains old code in the class KerasOptimizerUtils?

treo commented on May 27, 2024

No need to go and decompile anything. You can literally see the code as it was when M2.1 was released: https://github.com/deeplearning4j/deeplearning4j/blob/1.0.0-M2.1/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasOptimizerUtils.java

This is the PR that addressed the changes in Keras: #9939

ParshinAlex commented on May 27, 2024

Treo, will this change go into the next release, or can we use it somehow already?

agibsonccc commented on May 27, 2024

@ParshinAlex I'll fix whatever is going on here in the next release. I'm on the tail end of more important testing (the underlying cuda internals) that's unfortunately gone on longer than I'd like, but I've hit 90% of the milestones I need for that and will turn my attention to minor issues like this next. Unfortunately, unless you're willing to be part of the solution, either by being a paying customer of mine or by writing the code yourself, you'll just have to wait and downgrade for now.

ParshinAlex commented on May 27, 2024

@agibsonccc @treo Thank you for the explanations and effort, now it's clear.

Kali-Zoidberg commented on May 27, 2024

I was also running into this problem but resolved it by setting the enforceTrainingConfig param in importKerasSequentialModelAndWeights to false (as I'm only using the model for inference):

KerasModelImport.importKerasSequentialModelAndWeights(configPath + "model.h5", false);

Edit: It solves the runtime issue but the model gets hung up on loading.
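For context, a minimal sketch of that workaround as a full call (the path is a placeholder, and the import throws checked exceptions that have to be declared or handled):

    import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

    // enforceTrainingConfig = false skips strict validation of training-only
    // settings such as the optimizer config, which is fine for inference.
    MultiLayerNetwork model = KerasModelImport.importKerasSequentialModelAndWeights(
            "model.h5", false);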

agibsonccc commented on May 27, 2024

That doesn't happen for random reasons. Whatever is going on there might be model specific. Code doesn't just mysteriously "hang"; it has a reason. Can you look into this using jstack or post the model somewhere? I don't need anything secret, just something to reproduce it.

Kali-Zoidberg commented on May 27, 2024

That doesn't happen for random reasons. Whatever is going on there might be model specific. Code doesn't just mysteriously "hang"; it has a reason. Can you look into this using jstack or post the model somewhere? I don't need anything secret, just something to reproduce it.

Sure thing, here is the stack trace. I noticed that the stack trace showed Nd4jCpu.execCustomOp2, so I changed the ND4J backend to use the GPU. The model loaded after that. I am using an LSTM network and have changed the file extension from .h5 to .txt to upload it here: stacked_lstm - Copy.txt

2023-11-20 16:25:42
Full thread dump Java HotSpot(TM) 64-Bit Server VM (18.0.1+10-24 mixed mode, sharing):

"main" #1 prio=5 os_prio=0 cpu=150531.25ms elapsed=154.88s tid=0x000001fba9e95630 nid=9376 runnable  [0x0000005b5ecfe000]
   java.lang.Thread.State: RUNNABLE
        at org.nd4j.linalg.cpu.nativecpu.bindings.Nd4jCpu.execCustomOp2(Native Method)
        at org.nd4j.linalg.cpu.nativecpu.ops.NativeOpExecutioner.exec(NativeOpExecutioner.java:1900)
        at org.nd4j.linalg.cpu.nativecpu.ops.NativeOpExecutioner.exec(NativeOpExecutioner.java:1540)
        at org.nd4j.linalg.factory.Nd4j.exec(Nd4j.java:6545)
        at org.nd4j.linalg.api.rng.distribution.impl.OrthogonalDistribution.sample(OrthogonalDistribution.java:240)
        at org.nd4j.linalg.api.rng.distribution.impl.OrthogonalDistribution.sample(OrthogonalDistribution.java:255)
        at org.deeplearning4j.nn.weights.WeightInitDistribution.init(WeightInitDistribution.java:48)
        at org.deeplearning4j.nn.params.LSTMParamInitializer.init(LSTMParamInitializer.java:143)
        at org.deeplearning4j.nn.conf.layers.LSTM.instantiate(LSTM.java:82)
        at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:720)
        at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:605)
        at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:266)
        at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:255)
        at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:204)
        at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:89)
        at server.ServerMain.loadKerasModel(ServerMain.java:155)
        at server.ServerMain.main(ServerMain.java:55)

   Locked ownable synchronizers:
        - None

trevorrovert commented on May 27, 2024

So I may have discovered something related to this issue. I was running into the same issues described above, read through the comments, and saw the latest one from @Kali-Zoidberg, specifically how the issue was resolved by switching to a GPU backend. That got me thinking about the number of connections between the hidden layers of my model. I dropped my LSTM units from 100 down to 8, and this seemed to resolve my issue. I was able to bump the units up to 15 and still get the model to load on CPU, but was not able to load with anything above 15. (That would be consistent with the stack trace above, which shows the time going into OrthogonalDistribution.sample during LSTM weight initialization.) Hope this helps.
