
Comments (10)

sitamgithub-MSIT commented on July 30, 2024

> @sitamgithub-MSIT we haven't heard an update in a bit and are just wondering if you're still working on the issue?

Yes, I am working on it. I am checking this Hugging Face example for Gemma and thinking about reproducing the same for CodeGemma.
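For reference, adapting a Hugging Face Gemma example to CodeGemma might look roughly like the sketch below. The model id (google/codegemma-7b-it), prompt, and generation settings are assumptions, not details from this thread, and the checkpoint is gated on the Hub, so it requires accepting the license first.

```python
# Minimal sketch: swap a Gemma checkpoint for a CodeGemma one in the usual
# transformers flow. Model id and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/codegemma-7b-it"  # assumed CodeGemma checkpoint (gated on the Hub)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```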


sitamgithub-MSIT commented on July 30, 2024

/assigntome


sitamgithub-MSIT commented on July 30, 2024

@duncantech I am thinking of training the latest Gemma model with PyTorch/XLA. Is that okay?


JackCaoG commented on July 30, 2024

I think the Gemma model should work out of the box. Take a look at https://github.com/google/gemma_pytorch#try-it-out-with-pytorchxla. Feel free to give it a try and see if we can improve anything.
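As a rough illustration of what running a model through PyTorch/XLA involves, here is a minimal single-device sketch, assuming torch_xla is installed and a TPU is attached. The tiny linear model and optimizer settings are stand-ins for Gemma/CodeGemma, not code from the linked repo.

```python
# Minimal PyTorch/XLA sketch: place a model and a batch on the XLA (TPU)
# device and run one training step. The toy model is a stand-in.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                    # the attached TPU device
model = torch.nn.Linear(128, 2).to(device)  # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(8, 128, device=device)
y = torch.randint(0, 2, (8,), device=device)

optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
xm.mark_step()   # materialize the lazily built XLA graph on the TPU
print(loss.item())
```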


sitamgithub-MSIT commented on July 30, 2024

> I think the Gemma model should work out of the box. Take a look at https://github.com/google/gemma_pytorch#try-it-out-with-pytorchxla. Feel free to give it a try and see if we can improve anything.

OK, I will look into the Gemma part.

For the other model I am trying, there are a few things I need to know: can I use a free cloud TPU provider such as Kaggle or Colab, or is it necessary to run it on the v5 TPUs in Google Cloud?


JackCaoG commented on July 30, 2024

That part I think @duncantech can answer.


duncantech commented on July 30, 2024

You can work with a free TPU provider if you'd like to get things started.

We should also be able to provide a small number of v5e TPUs to try with, too.
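Whichever environment ends up being used (Kaggle, Colab, or a Cloud v5e), a quick sanity check that torch_xla can see the TPU might look like the short sketch below, assuming torch_xla is installed in that environment.

```python
# Quick sanity check that PyTorch/XLA sees a TPU in the current environment.
import torch_xla.core.xla_model as xm

print("Supported XLA devices:", xm.get_xla_supported_devices())
print("Default XLA device:", xm.xla_device())
```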


duncantech commented on July 30, 2024

@sitamgithub-MSIT we haven't heard an update in a bit and are just wondering if you're still working on the issue?


sitamgithub-MSIT commented on July 30, 2024

@duncantech I am preparing a script to run on TPUs. Since I am using CodeGemma, which has 7B parameters, it will not fit in Colab unless we use a 4-bit version. Should I use a bitsandbytes configuration for that, or should I just train it in the cloud and see if everything works?
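For context, a 4-bit bitsandbytes setup through transformers might look roughly like the sketch below. The checkpoint name and quantization settings are assumptions, and bitsandbytes quantization generally targets CUDA GPUs, so this would be the Colab-GPU route rather than a TPU/XLA one.

```python
# Possible 4-bit loading setup with bitsandbytes via transformers (a sketch,
# not the thread's actual script). Checkpoint name is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/codegemma-7b-it"  # assumed CodeGemma checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```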


duncantech commented on July 30, 2024

You can try the 4-bit version and see what the performance is like, since that would be easier for others to run in the future!

