
largelanguagemodelsprojects's People

Contributors

muhammadmoinfaisal


largelanguagemodelsprojects's Issues

Number of tokens (525) exceeded maximum context length (512).

Hi, when I try to run [Chat_with_CSV_File_Lllama2], I encounter this problem:

```
Number of tokens (663) exceeded maximum context length (512).
Number of tokens (664) exceeded maximum context length (512).
Number of tokens (665) exceeded maximum context length (512).
Number of tokens (666) exceeded maximum context length (512).
Number of tokens (667) exceeded maximum context length (512).
Number of tokens (668) exceeded maximum context length (512).
Number of tokens (669) exceeded maximum context length (512).
Number of tokens (670) exceeded maximum context length (512).
```

I load the model like this:

```python
from langchain.llms import CTransformers

llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q8_0.bin",
    model_type="llama",
    max_new_tokens=512,
    temperature=0.1,
)
```

Can anyone help me solve this problem?
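The error suggests the prompt itself exceeds ctransformers' default 512-token context window, so max_new_tokens alone will not help. LangChain's CTransformers wrapper accepts a config dict that can raise the window; a minimal sketch (2048 is Llama 2's native context size):

```python
from langchain.llms import CTransformers

# Sketch: pass generation settings through ctransformers' config dict
# and raise the context window above the default 512 tokens.
llm = CTransformers(
    model="models/llama-2-7b-chat.ggmlv3.q8_0.bin",
    model_type="llama",
    config={
        "max_new_tokens": 512,
        "temperature": 0.1,
        "context_length": 2048,  # the prompt in the log above needed ~670 tokens
    },
)
```

Alternatively, reduce the number of retrieved CSV rows stuffed into the prompt so it fits in 512 tokens.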

llama-2-13b

Hello, I was trying to finetune llama-2-13b, but I ran into a CUDA out-of-memory error. I tried using device_map to offload layers, but I still hit the CUDA memory problem. Do you have any tips for finetuning bigger models like the 13B version?
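For what it's worth, a common recipe for fitting 13B finetuning on a single GPU is QLoRA-style 4-bit loading plus LoRA adapters, rather than plain device_map offloading. A minimal sketch, assuming transformers, bitsandbytes, and peft are installed (the exact values are illustrative, not the repo's settings):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit to cut VRAM use roughly 4x versus fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Train small LoRA adapters instead of all 13B weights.
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
```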

Google PaLM embedding error

I am getting this error:

```
ImportError: cannot import name 'GooglePalmEmbeddings' from 'langchain.embeddings' (e:\pdfwebsite\venv_name\lib\site-packages\langchain\embeddings\__init__.py)
Traceback:
File "e:\pdfwebsite\venv_name\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "E:\pdfwebsite\app.py", line 6, in <module>
    from langchain.embeddings import GooglePalmEmbeddings
```
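If it helps, the usual fix (assuming a reasonably recent langchain; on very old versions the class does not exist yet and upgrading is required) is to install langchain-community and import from there. A sketch:

```python
# pip install -U langchain-community google-generativeai
from langchain_community.embeddings import GooglePalmEmbeddings

embeddings = GooglePalmEmbeddings(google_api_key="...")  # key handling is illustrative
```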

terminate called after throwing an instance of 'std::runtime_error' | what(): unexpectedly reached end of file | Aborted (core dumped)

Hello, I am running the llama-2-7b-chat.ggmlv3.q4_0.bin model with Run_llama2_local_cpu_upload.
The server runs Ubuntu 20.04. When I ran it on my local computer (Windows), it worked very well, but when I run it on the other machine (the server), it does not work.

I use this model with code from https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/tree/main/Run_llama2_local_cpu_upload

Error:

```
terminate called after throwing an instance of 'std::runtime_error'
  what():  unexpectedly reached end of file
Aborted (core dumped)
```

If you have any solution, please share it. Thank you so much!
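For reference, this llama.cpp/ctransformers error usually means the .bin file on the server is truncated or corrupted, often from an interrupted copy or download. One way to check (a sketch; the filename is taken from the issue) is to compare checksums between the working machine and the server:

```python
import hashlib

# Compute a SHA-256 checksum of the model file; if the value differs
# between the working machine and the server, re-copy or re-download it.
h = hashlib.sha256()
with open("llama-2-7b-chat.ggmlv3.q4_0.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest())
```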

Output formatting is weird sometimes

Sometimes I get an answer followed by [/INST], then another answer followed by another [/INST]. For example:

Question:
How many senators are there in the US Senate?

Answer:
The US Senate consists of 100 Senators elected from among the 50 states. [/INST] There are currently 100 Senators in the United States Senate. [/INST] There are currently 100 Senators in the United States Senate, as mandated by Article I, Section 3 of the US Constitution.

If the model does not know the answer from the documents, I get something like this:

Question:
{{question}}

Answer:

```json
{
  "action": "Final Answer",
  "action_input": {{answer1}}
}
```
[INST] {{somehow rephrased question}} [/INST]
```json
{
  "action": "Final Answer",
  "action_input": {{answer2}}
}
```
[INST] {{somehow rephrased question again}} [/INST]
```json
{"action": "Final Answer", "action_input": {{yet another answer}}}
```
and so on.

Do you have any idea why this is happening?
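One plausible explanation (an assumption, not confirmed by the repo) is that the prompt does not follow Llama 2's chat template, so the model keeps generating new [INST] turns on its own. Wrapping the question in the template and stopping generation at the next [INST] tag often prevents the repeated answers; a sketch using LangChain's LlamaCpp wrapper:

```python
from langchain.llms import LlamaCpp

# Stop generation as soon as the model tries to open a new [INST] turn.
llm = LlamaCpp(
    model_path="llama-2-7b-chat.ggmlv3.q4_0.bin",  # illustrative path
    n_ctx=1024,
    stop=["[INST]", "[/INST]"],
)

# Llama 2 chat expects this template around each user turn.
prompt = "<s>[INST] How many senators are there in the US Senate? [/INST]"
print(llm(prompt))
```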

[QA Book PDF LangChain Llama 2/Final_Llama_CPP_Ask_Question_from_book_PDF_Llama] Could not load Llama model from path

This cell is not really working:

```python
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

n_gpu_layers = 40  # Change this value based on your model and your GPU VRAM pool.
n_batch = 256  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.

# Load the model.
llm = LlamaCpp(
    model_path=model_path,
    max_tokens=256,
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    n_ctx=1024,
    verbose=False,
)
```

I tried to download the model to a local folder with this:

```python
local_dir = "/content/my_local_directory"  # For Google Colab, you can use the /content directory

hf_hub_download(
    repo_id=model_name_or_path,
    filename=model_basename,
    cache_dir=local_dir,
)
```

and then specified the path, but it does not work; I get the same error.
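A possible culprit: hf_hub_download with cache_dir stores the file under a nested cache layout, so a hand-built path will miss it. The function returns the resolved file path, which can be passed straight to LlamaCpp; a sketch reusing the names from the snippet above:

```python
from huggingface_hub import hf_hub_download
from langchain.llms import LlamaCpp

# Use the returned path instead of guessing where the cache put the file.
model_path = hf_hub_download(
    repo_id=model_name_or_path,
    filename=model_basename,
)
llm = LlamaCpp(model_path=model_path, n_ctx=1024)
```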

LangChain Error

When I run the script, it says to install langchain_community instead.

```
/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain/embeddings/__init__.py:29: LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.embeddings import GooglePalmEmbeddings.

To install langchain-community run pip install -U langchain-community.
  warnings.warn(
/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.llms import GooglePalm.

To install langchain-community run pip install -U langchain-community.
  warnings.warn(
/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain/vectorstores/__init__.py:35: LangChainDeprecationWarning: Importing vector stores from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.vectorstores import FAISS.

To install langchain-community run pip install -U langchain-community.
  warnings.warn(
2024-03-18 06:50:54.539 Uncaught app exception
Traceback (most recent call last):
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
  File "/home/sagemaker-user/streamlitapp/chat_with_multiple_pdfs_with_googlepalm2_and_langchain.py", line 90, in <module>
    main()
  File "/home/sagemaker-user/streamlitapp/chat_with_multiple_pdfs_with_googlepalm2_and_langchain.py", line 84, in main
    st.session_state.conversation = get_conversational_chain(vector_store) #sets up and stores the conversational system in the Streamlit session state so it persists across user interactions
  File "/home/sagemaker-user/streamlitapp/chat_with_multiple_pdfs_with_googlepalm2_and_langchain.py", line 47, in get_conversational_chain
    llm=GooglePalm() #initialise the language model, in this case Google PaLM
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 179, in warn_if_direct_instance
    emit_warning()
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 117, in emit_warning
    warn_deprecated(
```

After I changed to langchain-community, I get the error below:

```
2024-03-18 06:55:38.620 Uncaught app exception
Traceback (most recent call last):
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
  File "/home/sagemaker-user/streamlitapp/chat_with_multiple_pdfs_with_googlepalm2_and_langchain.py", line 93, in <module>
    main()
  File "/home/sagemaker-user/streamlitapp/chat_with_multiple_pdfs_with_googlepalm2_and_langchain.py", line 87, in main
    st.session_state.conversation = get_conversational_chain(vector_store) #sets up and stores the conversational system in the Streamlit session state so it persists across user interactions
  File "/home/sagemaker-user/streamlitapp/chat_with_multiple_pdfs_with_googlepalm2_and_langchain.py", line 50, in get_conversational_chain
    llm=GooglePalm() #initialise the language model, in this case Google PaLM
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 179, in warn_if_direct_instance
    emit_warning()
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 117, in emit_warning
    warn_deprecated(
  File "/home/sagemaker-user/streamlitenv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 337, in warn_deprecated
    raise NotImplementedError(
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```

I'd appreciate your help.
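One thing worth checking (an assumption, since the second traceback still goes through langchain's deprecation shim): the import statements themselves need to point at langchain_community, not just have the package installed. A sketch:

```python
# pip install -U langchain-community
from langchain_community.llms import GooglePalm
from langchain_community.embeddings import GooglePalmEmbeddings

llm = GooglePalm(google_api_key="...")  # key handling is illustrative
```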

Low Speed

Hello dear Muhammad Moin,
When I try to get a response from the model, it takes too much time. Any idea why, or how I can fix it?

btw, thanks for sharing
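Not an authoritative diagnosis, but CPU inference speed is usually governed by quantization level, thread count, and batch size. If the project's LlamaCpp loader is in use, these knobs are worth trying (values illustrative):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llama-2-7b-chat.ggmlv3.q4_0.bin",  # a smaller quant (q4) is faster than q8
    n_threads=8,    # match your physical CPU core count
    n_batch=512,    # larger batches speed up prompt ingestion
    n_ctx=1024,
)
```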

HTTP and OS Error

While running notebook_login() or the huggingface-cli login command, before initializing the tokenizer, I get the error below. Can you tell me how I can solve it?

```
HTTPError                                 Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py in hf_raise_for_status(response, endpoint_name)
    260 try:
--> 261     response.raise_for_status()
    262 except HTTPError as e:

10 frames
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/tokenizer_config.json

The above exception was the direct cause of the following exception:

GatedRepoError                            Traceback (most recent call last)
GatedRepoError: 403 Client Error. (Request ID: Root=1-64c511c2-242fa8811f9d12ed68e0914a;bb0a7569-d355-4f16-87b5-772d92fd3c30)

Cannot access gated repo for url https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/resolve/main/tokenizer_config.json.
Access to model meta-llama/Llama-2-7b-chat-hf is restricted and you are not in the authorized list. Visit https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to ask for access.

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
    431
    432 except RepositoryNotFoundError:
--> 433     raise EnvironmentError(
    434         f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
    435         "listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to "

OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True.
```
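For anyone who lands here: the 403/GatedRepoError means meta-llama/Llama-2-7b-chat-hf is a gated repository, so access must first be requested (and approved) on the model page. After approval, authenticate with an access token before loading; a sketch (the token string is a placeholder):

```python
from huggingface_hub import login
from transformers import AutoTokenizer

login(token="hf_...")  # placeholder; use your own Hugging Face access token

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    use_auth_token=True,  # parameter name used by the transformers version in the traceback
)
```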
