
Comments (17)

drbh avatar drbh commented on August 17, 2024 1

Hi @mhou7712 thanks for opening this issue. Currently the LoRA feature only supports Hub adapter_ids; however, there is an open PR (#2193) that adds the ability to load a LoRA from a local directory, along with other refactors/updates. Once it is merged, this issue should be resolved.

For example, local LoRAs will be specified like name=/path,name2=/path:

text-generation-launcher \
--hostname 0.0.0.0 \
-p 5029 \
-e  \
--lora-adapters myadapter=/var/spool/llm_models/checkpoint-576  \
--model-id /var/spool/llm_models/Mistral-7B-v0.1_032124 \
--cuda-memory-fraction 0.90 \
--max-total-tokens 5000 \
--max-input-length 4096
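
Once the server is up, a specific adapter can then be selected per request by name. A minimal sketch, assuming the adapter_id generate parameter used for multi-LoRA and the port from the command above (the prompt and token count are just placeholders):

curl http://127.0.0.1:5029/generate \
    -X POST \
    -H "Content-Type: application/json" \
    -d '{"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 64, "adapter_id": "myadapter"}}'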


mhou7712 avatar mhou7712 commented on August 17, 2024 1

Hi @drbh, thanks for getting back to me.

I reviewed the example for --lora-adapters; it is wonderful to be able to point an adapter_id at a local directory.

Can I assume the following configuration will work once this is implemented, when I have multiple adapter_ids?

text-generation-launcher \
--hostname 0.0.0.0 \
-p 5029 \
-e \
--lora-adapters myadapter=/var/spool/llm_models/checkpoint-576,myadapter1=/var/spool/llm_models/checkpoint-577 \
--model-id /var/spool/llm_models/Mistral-7B-v0.1_032124 \
--cuda-memory-fraction 0.90 \
--max-total-tokens 5000 \
--max-input-length 4096

Again, thanks a lot!!


drbh avatar drbh commented on August 17, 2024 1

@mhou7712 yep that is correct!

*note: LoRA is still a new feature, and there will be more small updates/improvements coming to it in the following weeks


drbh avatar drbh commented on August 17, 2024 1

Hi @mhou7712, thanks for the follow-up. The LoRA updates were not included in the recent release since other changes took priority.

The PR to enable local adapters, #2193, should be merged soon. Apologies for the delay; I will drop any updates here!


mhou7712 avatar mhou7712 commented on August 17, 2024

I tried the command from the web example for LoRA:

text-generation-launcher --hostname 0.0.0.0 -p 5029 -e --lora-adapters "predibase/customer_support" --model-id "/var/spool/llm_models/Mistral-7B-v0.1_032124" --cuda-memory-fraction 0.90 --max-total-tokens 5000 --max-input-length 4096

It worked fine.

So the adapter can load model files from a Hub repo but not from a local path.


Egelvein avatar Egelvein commented on August 17, 2024

+1, the same problem


flozi00 avatar flozi00 commented on August 17, 2024

Are you using a Docker environment?


mhou7712 avatar mhou7712 commented on August 17, 2024

Yes, with the image downloaded from "ghcr.io/huggingface/text-generation-inference:2.1.0".

Question: can the adapter load model files from a local path instead of a repo? Thanks.


flozi00 avatar flozi00 commented on August 17, 2024

Maybe you didn't mount the folder containing the weights as a volume into the container?
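
For example, something along these lines, so that the adapter checkpoint is visible at the same path inside the container (just an illustration; adjust the image tag, port mapping, and mounts to your setup):

docker run --gpus all --shm-size 1g -p 5029:5029 \
    -v /var/spool/llm_models:/var/spool/llm_models \
    ghcr.io/huggingface/text-generation-inference:2.1.0 \
    --hostname 0.0.0.0 -p 5029 \
    --lora-adapters myadapter=/var/spool/llm_models/checkpoint-576 \
    --model-id /var/spool/llm_models/Mistral-7B-v0.1_032124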


mhou7712 avatar mhou7712 commented on August 17, 2024

Please compare the following two command lines:

LoRA with repo:
text-generation-launcher --hostname 0.0.0.0 -p 5029 -e --lora-adapters "predibase/customer_support" --model-id "/var/spool/llm_models/Mistral-7B-v0.1_032124" --cuda-memory-fraction 0.90 --max-total-tokens 5000 --max-input-length 4096

LoRA with local:
text-generation-launcher --hostname 0.0.0.0 -p 5029 -e --lora-adapters "/var/spool/llm_models/checkpoint-576" --model-id "/var/spool/llm_models/Mistral-7B-v0.1_032124" --cuda-memory-fraction 0.90 --max-total-tokens 5000 --max-input-length 4096

"LoRA with repo" works with me and "/var/spool/llm_models/Mistral-7B-v0.1_032124" is visible inside the container for the base model, and the same volume "/var/spool/llm_models/" is visible for "LoRA with local". Yes ,"checkpoint-576" is accessible under "/var/spool/llm_models" inside the container.

Thanks for asking and I did check all model files are available inside container.
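
(One way to double-check this from the host, where <container> stands in for the actual container name from docker ps:)

docker exec <container> ls /var/spool/llm_models/checkpoint-576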


bwhartlove avatar bwhartlove commented on August 17, 2024

+1 same issue


newsbreakDuadua9 avatar newsbreakDuadua9 commented on August 17, 2024

Same issue here.
The whole feature is not working in a Docker environment. Even when passing a random string as the adapter_id, the inference client still accepts it. The LoRA is not enabled at all!
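
For example, this is the kind of request I mean (host/port taken from the commands above; "not_a_real_adapter" is deliberately bogus, yet no error comes back):

curl http://127.0.0.1:5029/generate \
    -X POST \
    -H "Content-Type: application/json" \
    -d '{"inputs": "Hello", "parameters": {"max_new_tokens": 16, "adapter_id": "not_a_real_adapter"}}'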


mhou7712 avatar mhou7712 commented on August 17, 2024

@newsbreakDuadua9 a quick question: if the adapter_id is not a repo, then the adapter assumes it is something else (either a local filesystem directory or something that does not exist), right?

I have not tested passing a random string as --model-id. Yeah, that is a good test.

Thanks.


mhou7712 avatar mhou7712 commented on August 17, 2024

@flozi00 I am wondering whether this issue can be assigned to the adapter expert so he/she can help us look into it.
Thanks.


mhou7712 avatar mhou7712 commented on August 17, 2024

Greetings @drbh, please let me know if there is any information I can provide in order to get this issue moved forward. Thanks.


mhou7712 avatar mhou7712 commented on August 17, 2024

Hi @Narsil, I would like to follow up on when the next release that includes 071842e will be available. Thanks.


mhou7712 avatar mhou7712 commented on August 17, 2024

Hi @drbh, I have tried the v2.2.0 build with the following command:

text-generation-launcher --hostname 0.0.0.0 -p 5029 -e \
--lora-adapters llm_sql_inference=/var/spool/llm_models/md1fttrain/checkpoint-6770,llm_reasoning=/var/spool/llm_models/md2fttrain/checkpoint-360 \
--model-id "/var/spool/llm_models/Meta-Llama-3-8B-Instruct"

and got the following error:

xpu-smi:
N/A
2024-07-24T13:05:58.718927Z INFO text_generation_launcher: Args {
model_id: "/var/spool/llm_models/Meta-Llama-3-8B-Instruct",
revision: None,
validation_workers: 2,
sharded: None,
num_shard: None,
quantize: None,
speculate: None,
dtype: None,
trust_remote_code: false,
max_concurrent_requests: 128,
max_best_of: 2,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: None,
max_input_length: None,
max_total_tokens: None,
waiting_served_ratio: 0.3,
max_batch_prefill_tokens: None,
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: None,
hostname: "0.0.0.0",
port: 5029,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "localhost",
master_port: 29500,
huggingface_hub_cache: Some(
"/data",
),
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 1.0,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
otlp_service_name: "text-generation-inference.router",
cors_allow_origin: [],
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: true,
max_client_batch_size: 4,
lora_adapters: Some(
"llm_sql_inference=/var/spool/llm_models/md1fttrain/checkpoint-6770,llm_reasoning=/var/spool/llm_models/md2fttrain/checkpoint-360",
),
disable_usage_stats: false,
disable_crash_reports: false,
}
2024-07-24T13:05:58.720021Z INFO text_generation_launcher: Model supports up to 16384 but tgi will now set its default to 4096 instead. This is to save VRAM by refusing large prompts in order to allow more users on the same hardware. You can increase that size using --max-batch-prefill-tokens=16434 --max-total-tokens=16384 --max-input-tokens=16383.
2024-07-24T13:05:58.720043Z INFO text_generation_launcher: Default max_input_tokens to 4095
2024-07-24T13:05:58.720048Z INFO text_generation_launcher: Default max_total_tokens to 4096
2024-07-24T13:05:58.720053Z INFO text_generation_launcher: Default max_batch_prefill_tokens to 4145
2024-07-24T13:05:58.720058Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
2024-07-24T13:05:58.720295Z INFO download: text_generation_launcher: Starting check and download process for /var/spool/llm_models/Meta-Llama-3-8B-Instruct
2024-07-24T13:06:02.459860Z INFO text_generation_launcher: Files are already present on the host. Skipping download.
2024-07-24T13:06:03.226757Z INFO download: text_generation_launcher: Successfully downloaded weights for /var/spool/llm_models/Meta-Llama-3-8B-Instruct
2024-07-24T13:06:03.226887Z INFO download: text_generation_launcher: Starting check and download process for llm_sql_inference=/var/spool/llm_models/md1fttrain/checkpoint-6770
2024-07-24T13:06:08.155920Z ERROR download: text_generation_launcher: Download encountered an error:
2024-07-24 13:06:05.742 | INFO | text_generation_server.utils.import_utils::75 - Detected system cuda
Traceback (most recent call last):

File "/opt/conda/bin/text-generation-server", line 8, in
sys.exit(app())

File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 160, in download_weights
utils.weight_files(model_id, revision, extension)

File "/opt/conda/lib/python3.10/site-packages/text_generation_server/utils/hub.py", line 187, in weight_files
filenames = weight_hub_files(model_id, revision, extension)

File "/opt/conda/lib/python3.10/site-packages/text_generation_server/utils/hub.py", line 146, in weight_hub_files
info = api.model_info(model_id, revision=revision)

File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
validate_repo_id(arg_value)

File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
raise HFValidationError(

huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'llm_sql_inference=/var/spool/llm_models/md1fttrain/checkpoint-6770'. Use repo_type argument if needed.

Error: DownloadError

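From the traceback it looks like the whole "name=/path" pair is being handed to the Hub repo-id validator during the download step, instead of the path part being recognized as a local directory first. Just to illustrate the check I would expect (plain shell, not TGI code; the two paths are the ones from my command above):

for pair in llm_sql_inference=/var/spool/llm_models/md1fttrain/checkpoint-6770 \
            llm_reasoning=/var/spool/llm_models/md2fttrain/checkpoint-360; do
    name="${pair%%=*}"   # adapter name before the '='
    path="${pair#*=}"    # filesystem path after the '='
    if [ -d "$path" ]; then
        echo "$name: local adapter directory found at $path, no Hub download needed"
    else
        echo "$name: not a local directory, would be treated as a Hub repo id"
    fi
done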

Thanks.

