
ftpipe's People

Contributors

alondj, benbanuz, or9382, saar-eliad, saareliad, snover98

ftpipe's Issues

CPU support?

Hi, very neat project.

Question: is it possible to use FTPipe on massively parallel CPU clusters, for example 256 VMs?

After partitioning, how to get the training dataset, and some errors encountered during training

  1. Running python -m pipe.data.download_glue_data fails with errors such as an HTTP 400. Reviewing the code, the download URL appears to be stale (a possible workaround is sketched after this list), e.g.:
     TASK2PATH = {"CoLA": 'https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FCoLA.zip?alt=media&token=46d5e637-3411-4188-bc44-5809b5bfb5f4'}

  2. Could you provide a more detailed README about training, including how to obtain the training dataset?
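
As a possible workaround while the Firebase URLs are broken, the same data can be fetched from the Hugging Face hub. This is only a sketch and not part of FTPipe: it assumes the datasets library is installed and that the preprocessing step can consume locally saved TSV files (the file names and the glue_data/CoLA path are my own choices).

    # workaround_download_cola.py -- hypothetical helper, not part of FTPipe.
    import os
    from datasets import load_dataset

    # Fetch GLUE/CoLA from the Hugging Face hub instead of the stale Firebase URL.
    cola = load_dataset("glue", "cola")

    # Save each split as TSV so an existing preprocessing script can pick it up.
    os.makedirs("glue_data/CoLA", exist_ok=True)
    for split in ("train", "validation", "test"):
        cola[split].to_csv(os.path.join("glue_data/CoLA", f"{split}.tsv"), sep="\t", index=False)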

Is a more balanced partitioning plan for T5-3B on 16 GPUs possible?

Hi, thanks for your neat project!
I was trying to partition the T5-3B model. When partitioning it on 8 GPUs, I used

python -m autopipe.partition t5 --model_name_or_path t5-3b --t5_task squad1 --lmhead --n_iter 10 --analysis_batch_size 4 --partitioning_batch_size 4 --ct trace_cache_t53b_512_4_op --cp prof_cache_t53b_512_4_op_ftpipe --precompute_masks --stateless_tied --lmhead --n_partitions 8 --L 8 16 24 --max_seq_length 512 --answer_max_seq_length 4 --partitioning_method mpipe --preset ftpipe --dont_use_async_meta_alg --save_memory_mode --special_blocks T5Block

and got a pretty balanced plan:
[Screenshot, 2022-06-26 13:22:41: the resulting 8-GPU partitioning plan]

But when partitioning it on 16 GPUs, I used

python -m autopipe.partition t5 --model_name_or_path t5-3b --t5_task squad1 --lmhead --n_iter 10 --analysis_batch_size 4 --partitioning_batch_size 4 --ct trace_cache_t53b_512_4_op --cp prof_cache_t53b_512_4_op_ftpipe --precompute_masks --stateless_tied --lmhead --n_partitions 16 --L 16 32 48 --max_seq_length 512 --answer_max_seq_length 4 --partitioning_method mpipe --preset ftpipe --dont_use_async_meta_alg --save_memory_mode --special_blocks T5Block

and got a less balanced plan:

[Screenshot, 2022-06-26 13:52:17: the resulting 16-GPU partitioning plan]

I understand that the T5-3B model has 24 encoder blocks and 24 decoder blocks, so when partitioning on 8 GPUs each GPU can be assigned 3 encoder blocks and 3 decoder blocks, which is very close to what FTPipe produced. When partitioning on 16 GPUs, however, reading the partitioned model generated by FTPipe shows that each GPU is assigned roughly either 3 encoder blocks or 3 decoder blocks, resulting in a less balanced plan (a small arithmetic sketch of this imbalance follows the question below).

Question: Is it possible to get a more balanced partitioning plan for T5-3B-16GPUs using FTPipe?
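
To make the imbalance concrete, here is a small illustrative calculation. The per-block times are made-up placeholders (decoder blocks, which include cross-attention, are typically more expensive than encoder blocks); they are not measured FTPipe profiles.

    # Hypothetical per-block compute times (arbitrary units), NOT measured values.
    t_enc, t_dec = 1.0, 1.5
    # T5-3B has 24 encoder blocks and 24 decoder blocks.

    # 8 stages: each stage holds 3 encoder + 3 decoder blocks -> identical loads.
    load_8 = [3 * t_enc + 3 * t_dec] * 8

    # 16 stages: each stage holds 3 blocks of a single kind -> two load levels.
    load_16 = [3 * t_enc] * 8 + [3 * t_dec] * 8

    print(max(load_8) / min(load_8))    # 1.0 -> perfectly balanced
    print(max(load_16) / min(load_16))  # 1.5 -> slowest stage is 50% heavier

Under these assumed numbers the slowest 16-GPU stage is 50% heavier than the lightest one, which would cap pipeline throughput; balancing 16 stages may require splitting below whole-T5Block granularity.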

ModuleNotFoundError: No module named 'pipe.models.oldt5'

When I run the command

python -m pipe.main --help

that error appeared. Here is the full traceback:

Traceback (most recent call last):
  File "/home/sun/.conda/envs/nompi/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/sun/.conda/envs/nompi/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/sun/data/FTPipe/pipe/main.py", line 16, in <module>
    from pipe.data import add_dataset_argument
  File "/home/sun/data/FTPipe/pipe/data/__init__.py", line 6, in <module>
    from .from_args_and_kw import *
  File "/home/sun/data/FTPipe/pipe/data/from_args_and_kw.py", line 7, in <module>
    from pipe.models.simple_partitioning_config import PipelineConfig
  File "/home/sun/data/FTPipe/pipe/models/__init__.py", line 1, in <module>
    from . import transformers_utils
  File "/home/sun/data/FTPipe/pipe/models/transformers_utils.py", line 3, in <module>
    from .transformers_cfg import MODEL_TYPES
  File "/home/sun/data/FTPipe/pipe/models/transformers_cfg.py", line 883, in <module>
    from pipe.models.oldt5 import oldt5_functions_list
ModuleNotFoundError: No module named 'pipe.models.oldt5'

It seems a source file is missing from the pipe/models/ directory.
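
As a temporary, untested workaround (assuming the oldt5 functions are only needed for the legacy T5 code path), the import in pipe/models/transformers_cfg.py could be guarded so that unrelated commands still run; this is only a sketch, not a fix from the maintainers:

    # pipe/models/transformers_cfg.py (sketch of a guarded import, untested)
    try:
        from pipe.models.oldt5 import oldt5_functions_list
    except ModuleNotFoundError:
        # The module is missing from the repo; fall back to an empty list so that
        # unrelated code paths (e.g. `python -m pipe.main --help`) can still be used.
        oldt5_functions_list = []

The proper fix would be to add the missing pipe/models/oldt5.py file to the repository.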

CUDA-Aware MPI instead of NCCL

Hi, I wonder why the P2P communication in FTPipe is implemented with CUDA-Aware MPI instead of NCCL. Does it perform better, or is there another reason?

BTW, can I run this repo without re-compiling CUDA-Aware MPI and PyTorch?
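
For context, here is a minimal point-to-point sketch with torch.distributed that contrasts the two backends in question. It is illustrative only and is not FTPipe's actual communication code; the backend choice, tensor shape, and launch commands are assumptions.

    # p2p_sketch.py -- illustrative only; run one process per rank, e.g. with
    # `mpirun -np 2 python p2p_sketch.py` (MPI) or `torchrun --nproc_per_node=2 p2p_sketch.py` (NCCL).
    import torch
    import torch.distributed as dist

    def p2p_demo(backend: str = "nccl"):
        # "mpi" needs a CUDA-aware MPI build (and a PyTorch build with MPI support)
        # to move GPU tensors directly; "nccl" supports GPU send/recv natively
        # (point-to-point ops are available in PyTorch >= 1.8).
        dist.init_process_group(backend=backend)
        rank = dist.get_rank()
        device = torch.device("cuda", rank % torch.cuda.device_count())
        t = torch.ones(4, device=device)
        if rank == 0:
            dist.send(t, dst=1)   # blocking send of an activation-like tensor
        elif rank == 1:
            dist.recv(t, src=0)   # blocking receive into a pre-allocated buffer
        dist.barrier()

    if __name__ == "__main__":
        p2p_demo()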
