krishnaik06 / complete-langchain-tutorials
License: GNU General Public License v2.0
In LLM Generic APP > test.ipynb, the method chunk_data currently returns docs but should return doc. In its current form, doc is an unused variable, and the function does nothing except pass through the original docs argument. See below, where doc is assigned but never used:
def chunk_data(docs, chunk_size=800, chunk_overlap=50):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    doc = text_splitter.split_documents(docs)  # <-- doc unused
    return docs  # <-- returns the original docs argument
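A minimal sketch of the fix: return the split result instead of the original argument. To keep the sketch self-contained without langchain installed, a naive character splitter stands in for RecursiveCharacterTextSplitter (and chunk_overlap is omitted); in the notebook you would keep the real splitter and signature.

```python
from dataclasses import dataclass

@dataclass
class _NaiveSplitter:
    # Stand-in for RecursiveCharacterTextSplitter so this sketch runs without
    # langchain: splits each string into fixed-size character chunks.
    chunk_size: int = 800

    def split_documents(self, docs):
        return [d[i:i + self.chunk_size]
                for d in docs
                for i in range(0, len(d), self.chunk_size)]

def chunk_data(docs, chunk_size=800, splitter_cls=_NaiveSplitter):
    text_splitter = splitter_cls(chunk_size=chunk_size)
    chunks = text_splitter.split_documents(docs)
    return chunks  # <-- return the split result, not the original docs
```

The only behavioral change from the notebook version is the final line: returning what split_documents produced.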
This snippet throws a ValueError. How can we resolve this? Any thoughts?
from langchain import HuggingFaceHub

llm_hf = HuggingFaceHub(
    huggingfacehub_api_token="HUGGINGFACEHUB_API_TOKEN",
    repo_id="google/flan-t5-large",
    model_kwargs={"temperature": 0, "max_length": 64},
)
output=llm_hf.predict("Can you tell me the capital of Bangladesh")
print(output)
Output:
ValueError Traceback (most recent call last)
----> output=llm_hf.predict("Can you tell me the capital of Bangladesh")
print(output)
ValueError: Error raised by inference API: Authorization header is correct, but the token seems invalid
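The snippet passes the literal string "HUGGINGFACEHUB_API_TOKEN" as the token, which is why the Inference API reports the token as invalid. A sketch of the fix, assuming the real token is exported as an environment variable of that name:

```python
import os

def get_hf_token():
    # Read the actual token from the environment instead of passing the
    # placeholder string "HUGGINGFACEHUB_API_TOKEN" as the token value.
    token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
    if not token:
        raise ValueError("Set the HUGGINGFACEHUB_API_TOKEN environment variable")
    return token

# llm_hf = HuggingFaceHub(huggingfacehub_api_token=get_hf_token(),
#                         repo_id="google/flan-t5-large",
#                         model_kwargs={"temperature": 0, "max_length": 64})
```

The HuggingFaceHub call is left commented out because it requires network access and a valid account token.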
ValueError: source code string cannot contain null bytes
Traceback:
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "D:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\app.py", line 5, in <module>
from langchain_google_genai import GoogleGenerativeAIEmbeddings
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\langchain_google_genai\__init__.py", line 58, in <module>
from langchain_google_genai.enums import HarmBlockThreshold, HarmCategory
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\langchain_google_genai\enums.py", line 1, in <module>
import google.ai.generativelanguage_v1beta as genai
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\google\ai\generativelanguage_v1beta\__init__.py", line 21, in <module>
from .services.discuss_service import DiscussServiceAsyncClient, DiscussServiceClient
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\google\ai\generativelanguage_v1beta\services\discuss_service\__init__.py", line 16, in <module>
from .async_client import DiscussServiceAsyncClient
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\google\ai\generativelanguage_v1beta\services\discuss_service\async_client.py", line 32, in <module>
from google.api_core import exceptions as core_exceptions
File "d:\PROJECTS\artificial_intelligence_projects\MULTI_PDF_CHATBOT\venv\lib\site-packages\google\api_core\exceptions.py", line 32, in <module>
import grpc
After executing streamlit run app.py I get this error. Can anyone say how to solve it, please?
chatmultipledocuments -- I try to run this code, but whatever question I ask, I get the reply "{'output_text': 'Answer is not available in the context'}".
I have checked the google_api_key, which works fine. The index is created, but I still get the above reply.
Please help me understand what went wrong.
RuntimeError: Error in __cdecl faiss::FileIOReader::FileIOReader(const char *) at D:\a\faiss-wheels\faiss-wheels\faiss\faiss\impl\io.cpp:68: Error: 'f' failed: could not open faiss_index\index.faiss for reading: No such file or directory
The Streamlit application for Chat with multiple PDFs hits this issue while running. I tried figuring it out by manually creating the folder; I hope to sort it out soon.
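The RuntimeError means FAISS.load_local was called before the index was ever saved: save_local writes index.faiss and index.pkl into the folder, and load_local fails when they are absent. A small guard, assuming the default "faiss_index" folder name from the app:

```python
import os

def faiss_index_exists(index_dir="faiss_index"):
    # FAISS.save_local writes index.faiss and index.pkl into index_dir;
    # load_local raises the "could not open ... index.faiss" error when
    # either file is missing, so check before loading.
    return (os.path.isfile(os.path.join(index_dir, "index.faiss"))
            and os.path.isfile(os.path.join(index_dir, "index.pkl")))
```

In the app, call this before load_local and prompt the user to upload and process PDFs first (which triggers save_local) when it returns False; manually creating an empty folder will not help, since the files themselves must exist.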
Hey Krish,
I took a quick look at your .ipynb and want to notify you that you should remove the tokens. Maybe use dotenv to load them from a .env file, as mentioned in the app.
Keep up the good work, I really enjoy your tutorials. @krishnaik06
kr,
Fedi
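In practice python-dotenv's load_dotenv() does this in one call; the minimal stand-in below just illustrates the idea of keeping secrets in a git-ignored .env file instead of hardcoding them in the notebook. The parser here is a simplification, not python-dotenv's full format.

```python
import os

def load_env_file(path=".env"):
    # Minimal stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE
    # lines from a .env file and put them into os.environ, so tokens never
    # appear in the committed notebook.
    if not os.path.isfile(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Remember to add .env to .gitignore, and to revoke any token that has already been pushed.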
ValueError: The de-serialization relies loading a pickle file. Pickle files can be modified to deliver a malicious payload that results in execution of arbitrary code on your machine. You will need to set allow_dangerous_deserialization to True to enable deserialization. If you do this, make sure that you trust the source of the data. For example, if you are loading a file that you created, and know that no one else has modified the file, then this is safe to do. Do not set this to True if you are loading a file from an untrusted source (e.g., some random site on the internet).
Traceback:
File "C:\Users\pmanne\Downloads\Gemini\gemini\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 584, in _run_script
exec(code, module.__dict__)
File "C:\Users\pmanne\Downloads\Gemini\geminipdfchatbot.py", line 107, in <module>
main()
File "C:\Users\pmanne\Downloads\Gemini\geminipdfchatbot.py", line 92, in main
user_input(user_question)
File "C:\Users\pmanne\Downloads\Gemini\geminipdfchatbot.py", line 69, in user_input
new_db = FAISS.load_local("faiss_index", embeddings)
File "C:\Users\pmanne\Downloads\Gemini\gemini\lib\site-packages\langchain_community\vectorstores\faiss.py", line 1078, in load_local
raise ValueError(
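The traceback points at the fix: newer langchain_community versions refuse to unpickle index.pkl unless the caller opts in, so the one-line change in user_input is to pass allow_dangerous_deserialization=True to FAISS.load_local (safe here only because the app built the faiss_index folder itself). A sketch of the pattern, with the store class injected so it runs without langchain installed:

```python
def load_trusted_index(store_cls, folder, embeddings):
    # Opt in to pickle deserialization only for an index this app created
    # itself; never set this flag for files from an untrusted source.
    return store_cls.load_local(
        folder, embeddings, allow_dangerous_deserialization=True
    )

# In the app: new_db = load_trusted_index(FAISS, "faiss_index", embeddings)
```

With langchain_community this is equivalent to calling FAISS.load_local("faiss_index", embeddings, allow_dangerous_deserialization=True) directly.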
Hello! I'm getting this error in the PDF app:
"ValueError: The de-serialization relies loading a pickle file. Pickle files can be modified to deliver a malicious payload that results in execution of arbitrary code on your machine. You will need to set allow_dangerous_deserialization to True to enable deserialization. If you do this, make sure that you trust the source of the data. For example, if you are loading a file that you created, and know that no one else has modified the file, then this is safe to do. Do not set this to True if you are loading a file from an untrusted source (e.g., some random site on the internet)."
I am using the same app.py, but there seems to be some issue with this line:
llm = CTransformers(model_type='llama', model='models/llama-2-7b.ggmlv3.q8_0.bin', config={'max_new_tokens': 256, 'temperature': 0.01})
It gives the following error:
Repository Not Found for url: https://huggingface.co/api/models/models/llama-2-7b-chat.ggmlv3.q8_0.bin/revision/main. Please make sure you specified the correct repo_id and repo_type. If you are trying to access a private or gated repo, make sure you are authenticated.
I have downloaded the model locally, how can I use it? @krishnaik06
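The error URL shows what happened: the local file at models/llama-2-7b.ggmlv3.q8_0.bin was not found, so ctransformers fell back to treating the path as a Hugging Face repo id and got a 404 (note the error also mentions the -chat variant, so the filename on disk may not match the one in the code). A small guard, assuming the path from the snippet above, makes the real cause obvious:

```python
import os

def resolve_local_model(path):
    # When the file at `path` is missing, ctransformers treats `model` as a
    # Hugging Face repo id, producing the "Repository Not Found" error above;
    # fail fast with the absolute path that was actually checked instead.
    if not os.path.isfile(path):
        raise FileNotFoundError(f"GGML model not found: {os.path.abspath(path)}")
    return os.path.abspath(path)

# llm = CTransformers(model=resolve_local_model("models/llama-2-7b.ggmlv3.q8_0.bin"),
#                     model_type="llama",
#                     config={"max_new_tokens": 256, "temperature": 0.01})
```

Once the filename and working directory line up, passing the local path in model= is enough; no repo_id is needed for a downloaded file.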
Getting an error while installing langchain_google_genai:
ERROR: Could not find a version that satisfies the requirement langchain-google-genai (from versions: none)
ERROR: No matching distribution found for langchain-google-genai
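"from versions: none" usually means pip found the package but no release matches the current interpreter; langchain-google-genai requires a recent Python 3 (a 3.9 floor is assumed in the check below; verify the exact requirement on PyPI for your target version). A quick check of the environment pip is installing into:

```python
import sys

# Assumed minimum for langchain-google-genai; confirm against the package's
# "Requires: Python" line on PyPI before relying on it.
REQUIRED = (3, 9)
compatible = sys.version_info[:2] >= REQUIRED

print("Python", sys.version.split()[0],
      "meets the assumed 3.9 floor" if compatible
      else "is below the assumed 3.9 floor; upgrade Python and recreate the venv")
```

Also make sure pip itself is current (python -m pip install --upgrade pip) and that the venv you activated uses the interpreter you expect.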