gabacode / chatPDF
Load a PDF file and ask questions via llama_index and GPT
Can we do this with images?
Is there a limit on the length of the response? I am only getting responses of about 200 words…
Can it analyze more than one file?
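A ~200-word cap is consistent with the default completion budget of the LLM wrapper used below (the LangChain `OpenAI` class defaults to 256 `max_tokens`, and English averages roughly 0.75 words per token). A small sketch of that arithmetic, under those assumptions:

```python
# Assumption: replies are capped by a default max_tokens of 256, and
# English text averages ~0.75 words per token, which lands close to
# the ~200 words observed above.
WORDS_PER_TOKEN = 0.75

def approx_word_cap(max_tokens: int = 256) -> int:
    """Estimate the word limit implied by a completion-token budget."""
    return int(max_tokens * WORDS_PER_TOKEN)

# Passing a larger budget when constructing the LLM, e.g.
#   OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=1024)
# lifts the cap (shown as a comment to avoid a live API call).
```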
D:\Users\doctore\Documents\GitHub\chatPDF>python3 main.py
Traceback (most recent call last):
File "D:\Users\doctore\Documents\GitHub\chatPDF\main.py", line 17, in <module>
openai.api_key = os.environ["OPENAI_API_KEY"]
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\os.py", line 680, in __getitem__
raise KeyError(key) from None
KeyError: 'OPENAI_API_KEY'
I followed all the instructions but this came up.
I tried to fix it but it does not work.
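The `KeyError` means the `OPENAI_API_KEY` environment variable is not set in the shell that launched `python3 main.py`. A minimal guard (the `load_api_key` helper is hypothetical, not part of chatPDF) that turns the bare `KeyError` into an actionable message:

```python
import os

def load_api_key(env=os.environ):
    """Return the OpenAI key, or fail with instructions instead of a KeyError."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Set it before running main.py:\n"
            "  PowerShell: $env:OPENAI_API_KEY = 'sk-...'\n"
            "  cmd.exe:    set OPENAI_API_KEY=sk-...\n"
            "  bash/zsh:   export OPENAI_API_KEY=sk-..."
        )
    return key
```

Note that variables set with `set`/`export` only live in that shell session; a key set in one terminal is invisible to a script launched from another.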
Hi guys,
I'm trying your stuff right now but got this error. I'm using your hacker manifesto PDF file. Any ideas?
PS D:\Informatique\chatPDF> python main.py
'clear' is not recognized as an internal or external command,
operable program or batch file.
📁 Select a file
1. file.pdf
Enter a number, or 0 to exit: 1
👀 Loading...
Traceback (most recent call last):
File "D:\Informatique\chatPDF\main.py", line 71, in <module>
ask(file)
File "D:\Informatique\chatPDF\main.py", line 37, in ask
PDFReader = download_loader("PDFReader")
File "C:\Users\Thoma\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\readers\download.py", line 125, in download_loader
library = json.loads(library_raw_content)
File "C:\Users\Thoma\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\Thoma\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 4 (char 3)
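`download_loader` fetches a JSON loader registry over the network, so "Extra data" usually means the request came back as something other than clean JSON (an HTML error page, a proxy interstitial, or a truncated body). A hypothetical `parse_loader_registry` helper (not llama_index's actual code) that surfaces the raw payload instead of a bare decode error:

```python
import json

def parse_loader_registry(raw: str):
    """Parse the downloaded loader registry, showing the payload on failure.

    If the fetch returned an HTML error page or a truncated body,
    json.loads raises JSONDecodeError ('Extra data', 'Expecting value',
    ...); echoing the first bytes makes the real cause obvious.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise RuntimeError(
            f"Loader registry was not valid JSON ({exc}); "
            f"first bytes: {raw[:40]!r}. Check your network/proxy, "
            "or upgrade llama_index."
        ) from exc
```

Upgrading `llama_index` (newer releases changed how loaders are fetched) or retrying on a different network are the usual fixes.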
Traceback (most recent call last):
File "/Users/aquaticdwarf/Downloads/chatPDF-main/main.py", line 99, in <module>
ask(file)
File "/Users/aquaticdwarf/Downloads/chatPDF-main/main.py", line 38, in ask
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for OpenAI
__root__
Did not find openai_api_key, please add an environment variable OPENAI_API_KEY
which contains it, or pass openai_api_key
as a named parameter. (type=value_error)
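As the error message says, the key can also be passed as a named parameter instead of relying on the environment at validation time. A sketch with a hypothetical `openai_llm_kwargs` helper (not part of chatPDF) that fails early with a clear message:

```python
import os

def openai_llm_kwargs(model_name="text-davinci-003", temperature=0):
    """Collect kwargs for the OpenAI LLM wrapper, passing the key
    explicitly so pydantic validation no longer depends on the
    environment variable being visible to the library."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY, or pass openai_api_key directly.")
    return {
        "model_name": model_name,
        "temperature": temperature,
        "openai_api_key": key,  # the named parameter the error suggests
    }

# Hypothetical usage, mirroring main.py line 38:
#   llm_predictor = LLMPredictor(llm=OpenAI(**openai_llm_kwargs()))
```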
openai.api_key = os.environ["API_KEY"] = [REDACTED]
When asked
"Who is the first president of the USA?",
it answers:
"The context information provided does not mention anything about the first president of United States..."
I think there is a tweak to fall back to the LLM when the answer is not in the context.
Does anyone know?
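One approach (a sketch, not something chatPDF implements) is to detect the "not in my context" refusal and fall back to querying the model without retrieval. Here `ask_llm_directly` stands in for any callable that queries the LLM with no index attached:

```python
# Phrases that typically signal "the retrieved context lacks the answer".
# The marker list is an assumption; tune it to the refusals you actually see.
REFUSAL_MARKERS = (
    "does not mention",
    "not in the context",
    "no information provided",
)

def answer_with_fallback(index_answer: str, ask_llm_directly) -> str:
    """Return the index's answer, unless it is a context-miss refusal,
    in which case ask the bare LLM instead."""
    if any(marker in index_answer.lower() for marker in REFUSAL_MARKERS):
        return ask_llm_directly()
    return index_answer
```

The trade-off is that fallback answers come from the model's general knowledge, not the PDF, so it may be worth labeling them as such in the UI.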