
iusztinpaul / hands-on-llms

2.8K stars, 46 watchers, 445 forks, 26.24 MB

🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials

License: MIT License

Python 7.60% Makefile 0.40% Shell 0.47% Dockerfile 0.03% Jupyter Notebook 91.50%
bytewax comet-ml huggingface mlops qdrant transformers beam langchain generative-ai llms

hands-on-llms's Introduction

Paul Iusztin

banner

Senior Machine Learning Engineer • MLOps • Founder @ Decoding ML ~ Courses and articles about building production-grade ML/AI systems.



My passion is to:

  • Design and implement production AI/ML systems using MLOps best practices.

  • Teach people about the process.

🔗 More about me at https://pauliusztin.me


Socials

Gmail, Medium, LinkedIn, X (Twitter)




Email: [email protected]


DML Logo

Founder @ Decoding ML

→ A channel for battle-tested content on designing, coding, and deploying production-grade ML & MLOps systems.

Join Decoding ML for free articles and tutorials on production-grade AI, ML, and MLOps.


🎨 Creating content takes me a lot of time. If you enjoyed my work, consider supporting me by buying me a coffee.

hands-on-llms's People

Contributors

bhadreshpsavani, eltociear, iusztinpaul, joywalker, laziale2, paulescu, plantbasedtendies


hands-on-llms's Issues

"make run_batch" in the streaming_pipeline gives error

I get the following two validation errors in the streaming_pipeline while trying to run "make run_batch":

~/projects/hands-on-llms/modules/streaming_pipeline$ make run_batch
RUST_BACKTRACE=full poetry run python -m bytewax.run -p4 "tools.run_batch:build_flow(latest_n_days=1)"
2024-07-07 16:20:07,355 - INFO - Initializing env vars...
2024-07-07 16:20:07,355 - INFO - Loading environment variables from: .env
2024-07-07 16:20:07,356 - INFO - Extracting news from 2024-07-06 16:20:07.356816 to 2024-07-07 16:20:07.356816 [n_days=1]
2024-07-07 16:20:09,841 - INFO - HTTP Request: GET https://4007c426-1f83-4326-9500-ac1ae11ac9e6.us-east4-0.gcp.cloud.qdrant.io:6333/collections/alpaca_financial_news "HTTP/2 200 OK"
Traceback (most recent call last):
File "/home/patilmh/.cache/pypoetry/virtualenvs/streaming-pipeline-qRv2lzOY-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 88, in send
return parse_as_type(response.json(), type_)
File "/home/patilmh/.cache/pypoetry/virtualenvs/streaming-pipeline-qRv2lzOY-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 201, in parse_as_type
return model_type(obj=obj).obj
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 2 validation errors for ParsingModel[InlineResponse2005] (for parse_as_type)
obj -> result -> vectors_count
field required (type=value_error.missing)
obj -> result -> config -> optimizer_config -> max_optimization_threads
none is not an allowed value (type=type_error.none.not_allowed)

I tried the solution posted in #76, where @dvquy13 suggested initializing max_optimization_threads, but that did not fix this problem. Also note that issue #72 covered only one error:

obj -> result -> config -> optimizer_config -> max_optimization_threads
  none is not an allowed value (type=type_error.none.not_allowed)

That issue did not mention type=value_error.missing for obj -> result -> vectors_count. Does anyone have any suggestions?
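
For anyone hitting the same thing: these validation failures usually point to the pinned qdrant-client being older than the Qdrant Cloud server, whose collection-info responses may omit vectors_count and return null for max_optimization_threads, which the older client's pydantic models reject. A purely illustrative diagnostic (the cluster URL and API key are placeholders, not values from this repo) that fetches the collection info over plain HTTP, bypassing the client's parsing:

# Illustrative diagnostic only; QDRANT_URL and QDRANT_API_KEY are placeholders.
# Fetch the collection info directly to confirm the collection itself is healthy
# and that the failure happens in qdrant-client's response parsing.
import requests

QDRANT_URL = "https://<your-cluster-id>.us-east4-0.gcp.cloud.qdrant.io:6333"
QDRANT_API_KEY = "<your-api-key>"

resp = requests.get(
    f"{QDRANT_URL}/collections/alpaca_financial_news",
    headers={"api-key": QDRANT_API_KEY},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["result"]

# The two fields the pydantic models complain about:
print("vectors_count:", result.get("vectors_count"))
print("max_optimization_threads:", result["config"]["optimizer_config"].get("max_optimization_threads"))

If this request succeeds while qdrant_client keeps raising the ValidationError, bumping the qdrant-client version in modules/streaming_pipeline/pyproject.toml and re-running poetry install is a reasonable next step.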

Running locally on Windows runs into errors at each step

Environment

  • OS : Windows 11
  • Python Version : 3.11

Problems and Errors

Make install

Output:

poetry is not recognized. I installed it with pip install poetry, and that worked.

But then, running make install again gives:

make install
process_begin: CreateProcess(NULL, which python3.10, ...) failed.
"Installing training pipeline..."
poetry env use  && \
        PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring poetry install && \
        poetry run pip install torch==2.0.1

Not enough arguments (missing: "python")
make: *** [install] Error 1
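
For what it's worth, both failures point at the same root cause: the Makefile relies on which python3.10 (a Unix command) to pass an interpreter path to poetry env use, and on Windows that sub-command fails, so poetry env use is called with an empty argument ("Not enough arguments (missing: python)"). A small, purely illustrative check to find an interpreter path you could pass to poetry env use by hand:

# Illustrative only, not part of the repo: locate a Python interpreter on Windows
# that can be passed to "poetry env use <path>" manually, since the Makefile's
# "which python3.10" only works in Unix-like shells.
import shutil

for candidate in ("python3.10", "python", "py"):
    print(f"{candidate}: {shutil.which(candidate)}")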

CM and CT

In your pipeline design, how do you implement Continuous Monitoring and Continuous Training to account for language model drift over time?
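
Not from the repository, just an illustrative sketch of the pattern the question is about: a scheduled job scores the deployed model on a fresh window of recent data and kicks off retraining when the metric degrades past a threshold. All names below are hypothetical placeholders:

# Hypothetical sketch of a continuous-monitoring / continuous-training trigger.
import datetime

DRIFT_THRESHOLD = 0.15  # hypothetical: maximum tolerated relative drop vs. the baseline

def check_and_maybe_retrain(evaluate_fn, retrain_fn, baseline_score: float) -> None:
    # Run on a schedule (cron, Airflow, etc.): score the current model on a
    # recent data window and trigger retraining when it drifts too far.
    current_score = evaluate_fn(window_days=7)  # hypothetical evaluation hook
    relative_drop = (baseline_score - current_score) / baseline_score
    print(f"{datetime.date.today()}: score={current_score:.3f}, drop={relative_drop:.1%}")
    if relative_drop > DRIFT_THRESHOLD:
        retrain_fn()  # hypothetical hook into the training pipeline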

What is history in the question payload for the API call?

What is history in the query below? Is this something the app should get from the vector DB? Where should the user chat history be stored?

{"about_me": "I am a student and I have some money that I want to invest.", "question": "Should I consider investing in stocks from the Tech Sector?", "history": [["What is your opinion on investing in startup companies?", "Startup investments can be very lucrative, but they also come with a high degree of risk. It is important to do your due diligence and research the company thoroughly before investing."]]}

TypeError: 'NoneType' object is not subscriptable

Traceback (most recent call last):
  File "/mnt/cephfs/home/shixun2024/miniconda3/envs/GengN/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/mnt/cephfs/home/shixun2024/miniconda3/envs/GengN/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/mnt/cephfs/home/shixun2024/users/GengNan/hands-on-llms/modules/training_pipeline/tools/train_run.py", line 83, in <module>
    fire.Fire(train)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/beam/app.py", line 1346, in wrapper
    return func(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/users/GengNan/hands-on-llms/modules/training_pipeline/tools/train_run.py", line 79, in train
    training_api.train()
  File "/mnt/cephfs/home/shixun2024/users/GengNan/hands-on-llms/modules/training_pipeline/training_pipeline/api/training.py", line 228, in train
    trainer.train()
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2654, in training_step
    loss = self.compute_loss(model, inputs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2679, in compute_loss
    outputs = model(**inputs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 171, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 181, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 89, in parallel_apply
    output.reraise()
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/_utils.py", line 644, in reraise
    raise exception
TypeError: Caught TypeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker
    output = module(*input, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/peft/peft_model.py", line 922, in forward
    return self.base_model(
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 900, in forward
    transformer_outputs = self.transformer(
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 789, in forward
    outputs = torch.utils.checkpoint.checkpoint(
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 785, in custom_forward
    return module(*inputs, use_cache=use_cache, output_attentions=output_attentions)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 453, in forward
    attn_outputs = self.self_attention(
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 307, in forward
    query_layer, key_layer = self.maybe_rotary(query_layer, key_layer, past_kv_length)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 107, in forward
    cos, sin = self.cos_sin(seq_len, past_key_values_length, query.device, query.dtype)
  File "/mnt/cephfs/home/shixun2024/.cache/pypoetry/virtualenvs/training-pipeline-0MYnI07r-py3.10/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 101, in cos_sin
    self.cos_cached[:, past_key_values_length : seq_len + past_key_values_length],
TypeError: 'NoneType' object is not subscriptable

I always run into this problem. How can I solve it?
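
Not a confirmed fix, but the traceback shows the failure inside torch.nn.DataParallel replicas ("Caught TypeError in replica 0 on device 0"), i.e., the run is picking up several GPUs, and the Falcon rotary-embedding cache (cos_cached) ends up uninitialized on the replicas. A workaround sketch that is often worth trying is to restrict the process to a single GPU so the transformers Trainer never wraps the model in DataParallel:

import os

# Workaround sketch (not a confirmed fix): expose only one GPU to the process.
# Set this before torch/transformers are imported, or export CUDA_VISIBLE_DEVICES=0
# in the shell before launching the training command.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"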

Missing train_finqa.py file required for dev_train_beam target in Makefile

Description

I am trying to execute the dev_train_beam target from the Makefile, but it seems like the train_finqa.py file, which is supposed to be located in the ./tools directory, is missing from the repository.

Steps to Reproduce

  1. Navigate to the training_pipeline directory.
  2. Run the command make dev_train_beam.

Expected Behavior

The make dev_train_beam command should execute successfully by running the Beam training pipeline using the train_finqa.py script.

Actual Behavior

An error is thrown indicating that the train_finqa.py file is not found.

Additional Information

  • I have searched through the repository to find the train_finqa.py file but was unable to locate it.
  • There is no documentation explaining the absence of the train_finqa.py file or how to generate or locate it.

Questions

  • Should the train_finqa.py file be included in the repository?
  • Are there any steps or commands that need to be executed to generate or locate the train_finqa.py file?
  • Is there any additional documentation available that explains how to set up and execute the dev_train_beam target?

Thank you for your assistance!

Running on AWS Ubuntu

RUST_BACKTRACE=full poetry run python -m bytewax.run tools.run_real_time:build_flow
2024-03-16 01:16:27,321 - INFO - Initializing env vars...
2024-03-16 01:16:27,322 - INFO - Loading environment variables from: .env
2024-03-16 01:16:30,824 - INFO - HTTP Request: GET https://475ea256-3440-4488-8d03-627b9c0f6ba0.us-east4-0.gcp.cloud.qdrant.io:6333/collections/alpaca_financial_news "HTTP/2 200 OK"
Traceback (most recent call last):
File "/home/ubuntu/.cache/pypoetry/virtualenvs/streaming-pipeline-En05NEtH-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 88, in send
return parse_as_type(response.json(), type_)
File "/home/ubuntu/.cache/pypoetry/virtualenvs/streaming-pipeline-En05NEtH-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 201, in parse_as_type
return model_type(obj=obj).obj
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for ParsingModel[InlineResponse2005] (for parse_as_type)
obj -> result -> config -> optimizer_config -> max_optimization_threads
none is not an allowed value (type=type_error.none.not_allowed)

Exception: none is not an allowed value (type=type_error.none.not_allowed)

When I executed "make run_real_time" in the folder "hands-on-llms/modules/streaming_pipeline", following the instructions, an exception came up.

RUST_BACKTRACE=full poetry run python -m bytewax.run tools.run_real_time:build_flow
2024-03-12 10:14:28,492 - INFO - Initializing env vars...
2024-03-12 10:14:28,492 - INFO - Loading environment variables from: .env
2024-03-12 10:14:31,429 - INFO - HTTP Request: GET https://e4bf1959-f74c-490c-9f46-00897295ec9e.us-east4-0.gcp.cloud.qdrant.io:6333/collections/alpaca_financial_news "HTTP/2 200 OK"
Traceback (most recent call last):
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 88, in send
    return parse_as_type(response.json(), type_)
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 201, in parse_as_type
    return model_type(obj=obj).obj
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ParsingModel[InlineResponse2005] (for parse_as_type)
obj -> result -> config -> optimizer_config -> max_optimization_threads
  none is not an allowed value (type=type_error.none.not_allowed)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/bytewax/run.py", line 430, in <module>
    kwargs["flow"] = _locate_dataflow(module_str, attrs_str)
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/bytewax/run.py", line 204, in _locate_dataflow
    dataflow = attr(*args, **kwargs)
  File "/media/pactera/shared/Ubuntu/handsonllms/hands-on-llms/modules/streaming_pipeline/tools/run_real_time.py", line 26, in build_flow
    flow = flow_builder(model_cache_dir=model_cache_dir, debug=debug)
  File "/media/pactera/shared/Ubuntu/handsonllms/hands-on-llms/modules/streaming_pipeline/streaming_pipeline/flow.py", line 57, in build
    flow.output("output", _build_output(model, in_memory=debug))
  File "/media/pactera/shared/Ubuntu/handsonllms/hands-on-llms/modules/streaming_pipeline/streaming_pipeline/flow.py", line 90, in _build_output
    return QdrantVectorOutput(
  File "/media/pactera/shared/Ubuntu/handsonllms/hands-on-llms/modules/streaming_pipeline/streaming_pipeline/qdrant.py", line 42, in __init__
    self.client.get_collection(collection_name=self._collection_name)
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 844, in get_collection
    return self._client.get_collection(collection_name=collection_name, **kwargs)
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/qdrant_remote.py", line 1566, in get_collection
    result: Optional[types.CollectionInfo] = self.http.collections_api.get_collection(
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/http/api/collections_api.py", line 838, in get_collection
    return self._build_for_get_collection(
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/http/api/collections_api.py", line 336, in _build_for_get_collection
    return self.api_client.request(
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 68, in request
    return self.send(request, type_)
  File "/media/pactera/shared/Ubuntu/python_poetry/cache/virtualenvs/streaming-pipeline-q1Qy2sa1-py3.10/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 90, in send
    raise ResponseHandlingException(e)
qdrant_client.http.exceptions.ResponseHandlingException: 1 validation error for ParsingModel[InlineResponse2005] (for parse_as_type)
obj -> result -> config -> optimizer_config -> max_optimization_threads
  none is not an allowed value (type=type_error.none.not_allowed)
make: *** [Makefile:31: run_real_time] Error 1

Why is this happening, and what should I do? Thank you.
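
This is the same validation failure as in the "make run_batch" issue above: the pinned qdrant-client is most likely older than the Qdrant Cloud server answering the request, so the response no longer fits the client's pydantic models. A quick, illustrative way to compare the two versions (the URL and key are placeholders):

# Illustrative version check; QDRANT_URL and QDRANT_API_KEY are placeholders.
import importlib.metadata
import requests

QDRANT_URL = "https://<your-cluster-id>.us-east4-0.gcp.cloud.qdrant.io:6333"
QDRANT_API_KEY = "<your-api-key>"

client_version = importlib.metadata.version("qdrant-client")
# The Qdrant root endpoint reports the server version.
server_info = requests.get(QDRANT_URL, headers={"api-key": QDRANT_API_KEY}, timeout=10).json()

print("qdrant-client:", client_version)
print("qdrant server:", server_info.get("version"))

If the server is substantially newer, bumping qdrant-client in modules/streaming_pipeline/pyproject.toml and re-running poetry install is a reasonable first thing to try.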

Using fine-tuned model for inference

Hi @iusztinpaul,

Love the course so far!

I have a question: shouldn't we use our own fine-tuned model for inference instead of using Paul's PEFT model here?

id: iusztinpaul/fin-falcon-7b-lora:1.0.5

If yes, how should we publish our model from the experiment to the Comet Model Registry? Is it done manually via the Register Model button in the Comet experiment console view?

Thanks!
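
Not speaking for the author, but yes: you can either press the Register Model button in the Comet UI once the experiment has finished, or do it from code. A rough sketch of the programmatic route; the project name, model name, and checkpoint path are placeholders, and the exact helpers should be checked against the comet-ml version pinned in the course:

from comet_ml import Experiment

# Rough sketch only; all names and paths below are placeholders.
experiment = Experiment(project_name="hands-on-llms")  # API key is read from the environment
# ... fine-tuning happens here ...
experiment.log_model("fin-falcon-7b-lora", "./model_cache/best_checkpoint")  # upload the artifact
experiment.register_model("fin-falcon-7b-lora")  # promote the logged model to the Model Registry
experiment.end()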

Cannot edit .env file

After I create the .env file by copying .env.example to .env, I cannot edit any details. I don't understand what the issue is.
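
Hard to say without more detail (common culprits are the file being open in another program, missing write permissions, or the editor hiding dotfiles), but here is a quick, purely illustrative check that the .env file exists, is writable, and actually contains the copied contents:

# Illustrative sanity check, run from the pipeline's working directory.
import os
from pathlib import Path

env_path = Path(".env")
print("exists:  ", env_path.exists())
print("writable:", os.access(env_path, os.W_OK))
print(env_path.read_text()[:300])  # first few lines, to confirm the copy worked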
